---
license: gemma
base_model: google/paligemma-3b-pt-224
tags:
- generated_from_trainer
datasets:
- AffectNet
model-index:
- name: paligemma_emotion_
  results: []
---


# FaceScanPaliGemma_Emotion


The snippet below loads the model and runs inference on a single image:

```python
from PIL import Image
import torch
from transformers import PaliGemmaProcessor, PaliGemmaForConditionalGeneration

# Load the fine-tuned model in bfloat16 and the processor of the base model
model = PaliGemmaForConditionalGeneration.from_pretrained(
    'NYUAD-ComNets/FaceScanPaliGemma_Emotion', torch_dtype=torch.bfloat16
)
processor = PaliGemmaProcessor.from_pretrained("google/paligemma-3b-pt-224")

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

input_text = "what is the emotion of the person in the image?"
input_image = Image.open('image_path')  # replace 'image_path' with the path to your image

# Preprocess the prompt and image, and match the model's dtype
inputs = processor(
    text=input_text,
    images=input_image,
    padding="longest",
    do_convert_rgb=True,
    return_tensors="pt",
).to(device)
inputs = inputs.to(dtype=model.dtype)

# Generate the answer and strip the echoed prompt from the decoded output
with torch.no_grad():
    output = model.generate(**inputs, max_length=500)
result = processor.decode(output[0], skip_special_tokens=True)[len(input_text):].strip()
print(result)
```
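
If GPU memory is tight, the model can also be loaded with 4-bit quantization via `BitsAndBytesConfig`. This is a minimal sketch, assuming `bitsandbytes` is installed; the quantization settings shown are common defaults, not necessarily those used by the authors:

```python
import torch
from transformers import BitsAndBytesConfig, PaliGemmaForConditionalGeneration

# Hypothetical 4-bit configuration (common NF4 defaults, not the authors' setup)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = PaliGemmaForConditionalGeneration.from_pretrained(
    'NYUAD-ComNets/FaceScanPaliGemma_Emotion',
    quantization_config=bnb_config,
    device_map="auto",  # let accelerate place the quantized weights
)
```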


## Model description

This model is a fine-tuned version of [google/paligemma-3b-pt-224](https://huggingface.co/google/paligemma-3b-pt-224) on the AffectNet dataset.
The model classifies the emotion of a face image, or an image containing a single person, into one of eight categories: 'neutral', 'happy', 'sad', 'surprise', 'fear', 'disgust', 'anger', and 'contempt'.
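
Because the model emits free-form text, a small post-processing step is useful to map its output onto the fixed label set. A minimal sketch (this helper is not part of the released code):

```python
# The eight AffectNet emotion categories used by this model
EMOTIONS = ['neutral', 'happy', 'sad', 'surprise', 'fear', 'disgust', 'anger', 'contempt']

def to_label(generated_text: str):
    """Return the first known emotion mentioned in the text, or None."""
    text = generated_text.lower()
    for emotion in EMOTIONS:
        if emotion in text:
            return emotion
    return None  # model produced something outside the label set
```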


## Model Performance
Accuracy: 59.4%, F1 score: 59%
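
These metrics can be reproduced from a set of predictions with scikit-learn. A minimal sketch with hypothetical labels (the card does not state which F1 averaging was used; macro is shown as one common choice):

```python
from sklearn.metrics import accuracy_score, f1_score

y_true = ['happy', 'sad', 'neutral', 'anger']      # hypothetical gold labels
y_pred = ['happy', 'neutral', 'neutral', 'anger']  # hypothetical model predictions

print(accuracy_score(y_true, y_pred))             # fraction of exact matches
print(f1_score(y_true, y_pred, average='macro'))  # unweighted mean of per-class F1
```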


## Intended uses & limitations

This model is intended for research purposes.

## Training and evaluation data

The AffectNet dataset was used for training and validating the model.
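
AffectNet is distributed under a research license and must be obtained from its maintainers. Assuming the images are stored locally in one folder per class, they can be loaded with the generic `imagefolder` builder of the `datasets` library; the path below is a placeholder:

```python
from datasets import load_dataset

# "path/to/affectnet" is a placeholder for your local copy of the dataset,
# organized as one subfolder per emotion class.
dataset = load_dataset("imagefolder", data_dir="path/to/affectnet")
```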


## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 5
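
As a sketch only (the released training script is not reproduced here), these values map onto `transformers.TrainingArguments` roughly as follows; `output_dir` is a placeholder:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="paligemma_emotion",  # hypothetical output directory
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # total train batch size: 2
    optim="adamw_torch",             # Adam betas/epsilon are the library defaults
    lr_scheduler_type="linear",
    warmup_steps=2,
    num_train_epochs=5,
    bf16=True,                       # assumption: matches the released bfloat16 weights
)
```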

### Training results



### Framework versions

- Transformers 4.42.4
- Pytorch 2.1.2+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1



## BibTeX entry and citation info

```
@article{aldahoul2024exploring,
  title={Exploring Vision Language Models for Facial Attribute Recognition: Emotion, Race, Gender, and Age},
  author={AlDahoul, Nouar and Tan, Myles Joshua Toledo and Kasireddy, Harishwar Reddy and Zaki, Yasir},
  journal={arXiv preprint arXiv:2410.24148},
  year={2024}
}

@misc{ComNets,
  url={https://huggingface.co/NYUAD-ComNets/FaceScanPaliGemma_Emotion},
  title={FaceScanPaliGemma_Emotion},
  author={AlDahoul, Nouar and Zaki, Yasir}
}
```