---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
license: mit
library_name: py-feat
pipeline_tag: image-feature-extraction
---

# FaceNet

## Model Description
FaceNet uses an Inception-ResNet (v1) network pretrained on VGGFace2 to classify facial identities. The model also exposes a 512-dimensional latent facial embedding space.

## Model Details
- **Model Type**: Convolutional Neural Network (CNN)
- **Architecture**: Inception-ResNet (v1). The output layer classifies facial identities; the penultimate layer provides a 512-dimensional embedding representation
- **Input Size**: 112 x 112 pixels (see the preprocessing sketch below)
- **Framework**: PyTorch
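
The following is a minimal preprocessing sketch, assuming a pre-cropped face image and the 112 x 112 input size listed above; it uses Pillow and torchvision, and the file path is a placeholder rather than part of the original card.

```python
import torch
from PIL import Image
from torchvision import transforms

# Placeholder path to an already-detected/cropped face image
face_path = "path/to/extracted_face.jpg"

# Resize to 112 x 112 and convert to a float tensor in [0, 1]
preprocess = transforms.Compose([
    transforms.Resize((112, 112)),
    transforms.ToTensor(),
])

# Add a batch dimension: shape [1, 3, 112, 112]
face_tensor = preprocess(Image.open(face_path).convert("RGB")).unsqueeze(0)
```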

## Model Sources
- **Repository**: [GitHub Repository](https://github.com/timesler/facenet-pytorch/tree/master)
- **Paper**: [FaceNet: A Unified Embedding for Face Recognition and Clustering](https://arxiv.org/abs/1503.03832)

## Citation
If you use this model in your research or application, please cite the following paper:

F. Schroff, D. Kalenichenko, J. Philbin. FaceNet: A Unified Embedding for Face Recognition and Clustering, arXiv:1503.03832, 2015.

```
@inproceedings{schroff2015facenet,
  title={Facenet: A unified embedding for face recognition and clustering},
  author={Schroff, Florian and Kalenichenko, Dmitry and Philbin, James},
  booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
  pages={815--823},
  year={2015}
}
```

## Acknowledgements
We thank Tim Esler and David Sandberg for sharing their code and training weights with a permissive license. 

## Example Usage

```python
import torch
import torch.nn as nn
from PIL import Image
from torchvision import transforms
from feat.identity_detectors.facenet.facenet_model import InceptionResnetV1
from huggingface_hub import hf_hub_download

device = 'cpu'

# Build the backbone and attach the 8631-way VGGFace2 classification head
identity_detector = InceptionResnetV1(
    pretrained=None,
    classify=False,
    num_classes=None,
    dropout_prob=0.6,
    device=device,
)
identity_detector.logits = nn.Linear(512, 8631)

# Download the pretrained weights from the Hugging Face Hub and load them
identity_model_file = hf_hub_download(repo_id='py-feat/facenet', filename="facenet_20180402_114759_vggface2.pth")
identity_detector.load_state_dict(torch.load(identity_model_file, map_location=device))
identity_detector.eval()
identity_detector.to(device)

# Test model: load an extracted face crop and resize it to the expected input size
face_image = "path/to/your/test_image.jpg"  # Replace with your extracted face image
preprocess = transforms.Compose([transforms.Resize((112, 112)), transforms.ToTensor()])
extracted_faces = preprocess(Image.open(face_image).convert("RGB")).unsqueeze(0).to(device)

# 512 dimensional Facial Embeddings
with torch.no_grad():
    identity_embeddings = identity_detector.forward(extracted_faces)
```
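
As a follow-up sketch, the embeddings of two face crops can be compared (for example with cosine similarity) to check whether they likely belong to the same identity. This assumes `identity_detector`, `preprocess`, and `device` are defined as in the example above; the file paths are illustrative placeholders, not values from the original card.

```python
import torch
import torch.nn.functional as F
from PIL import Image

# Embed two pre-cropped face images (placeholder paths)
face_a = preprocess(Image.open("path/to/face_a.jpg").convert("RGB")).unsqueeze(0).to(device)
face_b = preprocess(Image.open("path/to/face_b.jpg").convert("RGB")).unsqueeze(0).to(device)

with torch.no_grad():
    emb_a = identity_detector(face_a)
    emb_b = identity_detector(face_b)

# Cosine similarity between the two 512-dimensional embeddings
similarity = F.cosine_similarity(emb_a, emb_b).item()
print(f"Cosine similarity: {similarity:.3f}")  # Higher values suggest the same identity
```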