---
library_name: py-feat
pipeline_tag: image-feature-extraction
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
license: mit
language:
- en
---

# RetinaFace

## Model Description
This is a PyTorch implementation of [RetinaFace: Single-stage Dense Face Localisation in the Wild](https://arxiv.org/abs/1905.00641), based on [biubug6's implementation](https://github.com/biubug6/Pytorch_Retinaface). RetinaFace is a single-stage face detector built on a deep convolutional neural network. It uses `mobilenet0.25` as the backbone network (only 1.7M parameters), but can also use `resnet50` as the backbone for better accuracy at the cost of additional computation.
For each detected face, the model returns a bounding box location, a detection confidence score, and five facial landmark keypoints (10 coordinate values).

- **License:** MIT
- **License Link:** [MIT License](https://github.com/biubug6/Pytorch_Retinaface/blob/master/LICENSE.MIT)
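
If you want to confirm which backbone a given set of weights expects, you can inspect the `config.json` shipped in this repo. A minimal sketch, assuming the biubug6-style config layout in which the `name` key identifies the backbone:

```python
import json

from huggingface_hub import hf_hub_download

# Fetch the config that ships with the weights in this repo
cfg_path = hf_hub_download(repo_id="py-feat/retinaface", filename="config.json")
with open(cfg_path) as f:
    cfg = json.load(f)

# In biubug6-style configs, "name" identifies the backbone,
# e.g. "mobilenet0.25" or "Resnet50" (assumption: py-feat keeps this key)
print(cfg.get("name", "unknown"))
```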

## Model Details
- **Model Type**: Convolutional Neural Network (MobileNet backbone)
- **Framework**: PyTorch

## Model Sources
- **Repository:** [Py-Feat](https://github.com/cosanlab/py-feat/tree/main/feat/face_detectors/Retinaface)
- **Paper:** [RetinaFace: Single-stage Dense Face Localisation in the Wild](https://arxiv.org/abs/1905.00641)

## Model Architecture
RetinaFace is a single-stage detector: the backbone (`mobilenet0.25` or `resnet50`) feeds a feature pyramid network with SSH-style context modules, and small convolutional heads predict, for each anchor prior, a face/background score, a 4-value bounding-box regression, and a 10-value (five-point) facial landmark regression.
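
As a simplified, self-contained sketch of this multi-task head design (the class name, channel width, and `num_anchors` value are illustrative, not py-feat's exact API):

```python
import torch
from torch import nn

class MultiTaskHead(nn.Module):
    """RetinaFace-style per-level heads: one 1x1 conv per task,
    reshaped so the last dimension holds the per-anchor prediction."""

    def __init__(self, in_channels: int = 64, num_anchors: int = 2):
        super().__init__()
        self.cls = nn.Conv2d(in_channels, num_anchors * 2, kernel_size=1)   # face / background
        self.box = nn.Conv2d(in_channels, num_anchors * 4, kernel_size=1)   # box offsets
        self.ldm = nn.Conv2d(in_channels, num_anchors * 10, kernel_size=1)  # 5 (x, y) landmarks

    def forward(self, feat: torch.Tensor):
        b = feat.shape[0]

        def to_anchors(t: torch.Tensor, k: int) -> torch.Tensor:
            # (B, A*k, H, W) -> (B, H*W*A, k): one row per anchor prior
            return t.permute(0, 2, 3, 1).reshape(b, -1, k)

        return (
            to_anchors(self.cls(feat), 2),
            to_anchors(self.box(feat), 4),
            to_anchors(self.ldm(feat), 10),
        )

# Example: a single 80x80 pyramid level with 64 channels and 2 anchors per cell
scores, boxes, landmarks = MultiTaskHead()(torch.zeros(1, 64, 80, 80))
print(scores.shape, boxes.shape, landmarks.shape)
# torch.Size([1, 12800, 2]) torch.Size([1, 12800, 4]) torch.Size([1, 12800, 10])
```
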
## Evaluation Results
The model was evaluated on the WIDER FACE dataset; see the benchmark results in the [biubug6 repository](https://github.com/biubug6/Pytorch_Retinaface).

## Citation
If you use the RetinaFace model in your research or application, please cite the following paper:

```
@misc{deng2019retinafacesinglestagedenseface,
      title={RetinaFace: Single-stage Dense Face Localisation in the Wild}, 
      author={Jiankang Deng and Jia Guo and Yuxiang Zhou and Jinke Yu and Irene Kotsia and Stefanos Zafeiriou},
      year={2019},
      eprint={1905.00641},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/1905.00641}
}
```

## Example Usage

```python
import os
import json

import numpy as np
import torch
from PIL import Image
from huggingface_hub import hf_hub_download
from feat.face_detectors.Retinaface.Retinaface_model import RetinaFace, postprocess_retinaface
from feat.utils.io import get_resource_path, get_test_data_path
from feat.utils.image_operations import convert_image_to_tensor, convert_color_vector_to_tensor

device = "cpu"

# Download the model weights and config file
face_config_file = hf_hub_download(
    repo_id="py-feat/retinaface",
    filename="config.json",
    cache_dir=get_resource_path(),
)
with open(face_config_file, "r") as f:
    face_config = json.load(f)

face_model_file = hf_hub_download(
    repo_id="py-feat/retinaface",
    filename="mobilenet0.25_Final.pth",
    cache_dir=get_resource_path(),
)

# Build the detector and load the pretrained weights
face_checkpoint = torch.load(face_model_file, map_location=device, weights_only=True)
face_detector = RetinaFace(cfg=face_config, phase="test")
face_detector.load_state_dict(face_checkpoint)
face_detector.eval()
face_detector.to(device)

# Preprocess: convert the image to a tensor and subtract the channel means
# ([123, 117, 104]) expected by the pretrained weights
frame = convert_image_to_tensor(Image.open(os.path.join(get_test_data_path(), "multi_face.jpg")))
single_frame = torch.sub(frame, convert_color_vector_to_tensor(np.array([123, 117, 104])))

# Run inference
with torch.no_grad():
    predicted_locations, predicted_scores, predicted_landmarks = face_detector(single_frame.to(device))

# Decode the per-anchor predictions into final face detections
face_output = postprocess_retinaface(
    predicted_locations,
    predicted_scores,
    predicted_landmarks,
    face_config,
    single_frame,
    device=device,
)
```
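
`face_output` holds the postprocessed detections: judging by the standard RetinaFace pipeline, `postprocess_retinaface` decodes the raw per-anchor predictions against the model's anchor priors and applies confidence thresholding and non-maximum suppression, leaving one box, score, and landmark set per detected face (the exact return structure is defined in py-feat's implementation linked above).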

## Acknowledgements
We thank the contributors and the open-source community for their valuable support in developing this model. Special thanks to the authors of the original RetinaFace paper, the creators of the WIDER FACE dataset, and biubug6 for sharing weights and code.