Merge pull request #1224 from serengil/feat-task-0104-benchmarks

Feat task 0104 benchmarks
This commit is contained in:
Sefik Ilkin Serengil 2024-04-30 20:23:45 +01:00 committed by GitHub
commit 4c7109d74c
7 changed files with 2400 additions and 21 deletions

6
.gitignore vendored

@@ -12,4 +12,8 @@ tests/*.ipynb
tests/*.csv
*.pyc
**/.coverage
**/.coverage.*
benchmarks/results
benchmarks/outputs
benchmarks/dataset
benchmarks/lfwe

CITATION.md

@@ -4,7 +4,22 @@ Please cite deepface in your publications if it helps your research. Here are it
### Facial Recognition
If you use deepface in your research for facial recognition purposes, please cite this publication.
If you use deepface in your research for facial recognition purposes, please cite these publications:
```BibTeX
@article{serengil2024lightface,
title = {A Benchmark of Facial Recognition Pipelines and Co-Usability Performances of Modules},
author = {Serengil, Sefik Ilkin and Ozpinar, Alper},
journal = {Bilisim Teknolojileri Dergisi},
volume = {17},
number = {2},
pages = {95-107},
year = {2024},
doi = {10.17671/gazibtd.1399077},
url = {https://dergipark.org.tr/en/pub/gazibtd/issue/84331/1399077},
publisher = {Gazi University}
}
```
```BibTeX
@inproceedings{serengil2020lightface,
@@ -14,14 +29,14 @@ If you use deepface in your research for facial recognition purposes, please cite
pages = {23-27},
year = {2020},
doi = {10.1109/ASYU50717.2020.9259802},
url = {https://doi.org/10.1109/ASYU50717.2020.9259802},
url = {https://ieeexplore.ieee.org/document/9259802},
organization = {IEEE}
}
```
### Facial Attribute Analysis
If you use deepface in your research for facial attribute analysis purposes such as age, gender, emotion or ethnicity prediction or face detection purposes, please cite this publication.
If you use deepface in your research for facial attribute analysis purposes such as age, gender, emotion or ethnicity prediction, please cite this publication.
```BibTeX
@inproceedings{serengil2021lightface,
@@ -31,11 +46,26 @@ If you use deepface in your research for facial attribute analysis purposes such
pages = {1-4},
year = {2021},
doi = {10.1109/ICEET53442.2021.9659697},
url = {https://doi.org/10.1109/ICEET53442.2021.9659697},
url = {https://ieeexplore.ieee.org/document/9659697/},
organization = {IEEE}
}
```
### Additional Papers
We have additionally released these papers within the DeepFace project for a multitude of purposes.
```BibTeX
@misc{serengil2023db,
title = {An evaluation of sql and nosql databases for facial recognition pipelines},
author = {Serengil, Sefik Ilkin and Ozpinar, Alper},
year = {2023},
archivePrefix = {Cambridge Open Engage},
doi = {10.33774/coe-2023-18rcn},
url = {https://www.cambridge.org/engage/coe/article-details/63f3e5541d2d184063d4f569}
}
```
### Repositories
Also, if you use deepface in your GitHub projects, please add `deepface` in the `requirements.txt`. Thereafter, your project will be listed in its [dependency graph](https://github.com/serengil/deepface/network/dependents).

README.md

@@ -136,20 +136,21 @@ embedding_objs = DeepFace.represent(img_path = "img.jpg",
<p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/model-portfolio-20240316.jpg" width="95%" height="95%"></p>
FaceNet, VGG-Face, ArcFace and Dlib are [overperforming](https://youtu.be/i_MOwvhbLdI) ones based on experiments. You can find the scores of those models below, on the [Labeled Faces in the Wild](https://sefiks.com/2020/08/27/labeled-faces-in-the-wild-for-face-recognition/) set, as declared by their creators.
FaceNet, VGG-Face, ArcFace and Dlib are overperforming ones based on experiments - see [`BENCHMARKS`](https://github.com/serengil/deepface/tree/master/benchmarks) for more details. The following table shows the scores measured within DeepFace alongside the scores reported in the models' original studies.
| Model | Declared LFW Score |
| -------------- | ------------------ |
| VGG-Face | 98.9% |
| Facenet | 99.2% |
| Facenet512 | 99.6% |
| OpenFace | 92.9% |
| DeepID | 97.4% |
| Dlib | 99.3% |
| SFace | 99.5% |
| ArcFace | 99.5% |
| GhostFaceNet | 99.7% |
| *Human-beings* | *97.5%* |
| Model | Measured Score | Declared Score |
| -------------- | -------------- | ------------------ |
| Facenet512 | 98.4% | 99.6% |
| Human-beings | 97.5% | 97.5% |
| Facenet | 97.4% | 99.2% |
| Dlib | 96.8% | 99.3% |
| VGG-Face | 96.7% | 98.9% |
| ArcFace | 96.7% | 99.5% |
| GhostFaceNet | 93.3% | 99.7% |
| SFace | 93.0% | 99.5% |
| OpenFace | 78.7% | 92.9% |
| DeepFace | 69.0% | 97.3% |
| DeepID | 66.5% | 97.4% |
Conducting experiments with those models within DeepFace may reveal disparities compared to the original studies, owing to the adoption of distinct detection or normalization techniques. Furthermore, some models have been released solely with their backbones, lacking pre-trained weights. Thus, we are utilizing their re-implementations instead of the original pre-trained weights.
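As a minimal illustration of how these scores translate into practice, the snippet below verifies a single pair with an explicitly chosen recognition model. The image file names are placeholders, and `Facenet512` is simply the top measured model in the table above.

```python
from deepface import DeepFace

# hypothetical image pair - replace with your own files
result = DeepFace.verify(
    img1_path = "img1.jpg",
    img2_path = "img2.jpg",
    model_name = "Facenet512",  # top measured model in the table above
)
print(result["verified"], result["distance"])
```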
@@ -194,7 +195,7 @@ Age model got ± 4.65 MAE; gender model got 97.44% accuracy, 96.29% precision an
**Face Detectors** - [`Demo`](https://youtu.be/GZ2p2hj2H5k)
Face detection and alignment are important early stages of a modern face recognition pipeline. Experiments show that alignment alone increases face recognition accuracy by almost 1%. [`OpenCV`](https://sefiks.com/2020/02/23/face-alignment-for-face-recognition-in-python-within-opencv/), [`Ssd`](https://sefiks.com/2020/08/25/deep-face-detection-with-opencv-in-python/), [`Dlib`](https://sefiks.com/2020/07/11/face-recognition-with-dlib-in-python/), [`MtCnn`](https://sefiks.com/2020/09/09/deep-face-detection-with-mtcnn-in-python/), `Faster MTCNN`, [`RetinaFace`](https://sefiks.com/2021/04/27/deep-face-detection-with-retinaface-in-python/), [`MediaPipe`](https://sefiks.com/2022/01/14/deep-face-detection-with-mediapipe/), `Yolo`, `YuNet` and `CenterFace` detectors are wrapped in deepface.
Face detection and alignment are important early stages of a modern face recognition pipeline. Experiments show that alignment alone increases face recognition accuracy by almost 1%. [`OpenCV`](https://sefiks.com/2020/02/23/face-alignment-for-face-recognition-in-python-within-opencv/), [`Ssd`](https://sefiks.com/2020/08/25/deep-face-detection-with-opencv-in-python/), [`Dlib`](https://sefiks.com/2020/07/11/face-recognition-with-dlib-in-python/), [`MtCnn`](https://sefiks.com/2020/09/09/deep-face-detection-with-mtcnn-in-python/), `Faster MtCnn`, [`RetinaFace`](https://sefiks.com/2021/04/27/deep-face-detection-with-retinaface-in-python/), [`MediaPipe`](https://sefiks.com/2022/01/14/deep-face-detection-with-mediapipe/), `Yolo`, `YuNet` and `CenterFace` detectors are wrapped in deepface.
<p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/detector-portfolio-v6.jpg" width="95%" height="95%"></p>
@@ -246,7 +247,7 @@ Face recognition models are actually CNN models and they expect standard sized i
<p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/detector-outputs-20240414.jpg" width="90%" height="90%"></p>
[RetinaFace](https://sefiks.com/2021/04/27/deep-face-detection-with-retinaface-in-python/) and [MTCNN](https://sefiks.com/2020/09/09/deep-face-detection-with-mtcnn-in-python/) seem to overperform in detection and alignment stages but they are much slower. If the speed of your pipeline is more important, then you should use opencv or ssd. On the other hand, if you consider the accuracy, then you should use retinaface or mtcnn.
[RetinaFace](https://sefiks.com/2021/04/27/deep-face-detection-with-retinaface-in-python/) and [MtCnn](https://sefiks.com/2020/09/09/deep-face-detection-with-mtcnn-in-python/) seem to overperform in detection and alignment stages but they are much slower. If the speed of your pipeline is more important, then you should use opencv or ssd. On the other hand, if you consider the accuracy, then you should use retinaface or mtcnn.
The performance of RetinaFace is very satisfactory even in crowds, as seen in the following illustration. Besides, it comes with incredible facial landmark detection performance. The highlighted red points show facial landmarks such as the eyes, nose and mouth, which is why the alignment score of RetinaFace is high as well.
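As a rough sketch of this speed/accuracy trade-off, the loop below times the same verification under several detector backends. The image paths are placeholders, and note that the first call per backend may also download model weights, so warm up before trusting any timings.

```python
import time

from deepface import DeepFace

for detector_backend in ["opencv", "ssd", "mtcnn", "retinaface"]:
    tic = time.time()
    result = DeepFace.verify(
        img1_path = "img1.jpg",  # placeholder inputs
        img2_path = "img2.jpg",
        detector_backend = detector_backend,
    )
    toc = time.time()
    print(f"{detector_backend}: verified = {result['verified']} in {toc - tic:.2f} seconds")
```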
@@ -336,7 +337,22 @@ You can also support this work on [Patreon](https://www.patreon.com/serengil?rep
Please cite deepface in your publications if it helps your research - see [`CITATIONS`](https://github.com/serengil/deepface/blob/master/CITATION.md) for more details. Here are its BibTeX entries:
If you use deepface in your research for facial recognition purposes, please cite this publication.
If you use deepface in your research for facial recognition purposes, please cite these publications:
```BibTeX
@article{serengil2024lightface,
title = {A Benchmark of Facial Recognition Pipelines and Co-Usability Performances of Modules},
author = {Serengil, Sefik Ilkin and Ozpinar, Alper},
journal = {Bilisim Teknolojileri Dergisi},
volume = {17},
number = {2},
pages = {95-107},
year = {2024},
doi = {10.17671/gazibtd.1399077},
url = {https://dergipark.org.tr/en/pub/gazibtd/issue/84331/1399077},
publisher = {Gazi University}
}
```
```BibTeX
@inproceedings{serengil2020lightface,
@@ -374,4 +390,5 @@ DeepFace is licensed under the MIT License - see [`LICENSE`](https://github.com/
DeepFace wraps some external face recognition models: [VGG-Face](http://www.robots.ox.ac.uk/~vgg/software/vgg_face/), [Facenet](https://github.com/davidsandberg/facenet/blob/master/LICENSE.md), [OpenFace](https://github.com/iwantooxxoox/Keras-OpenFace/blob/master/LICENSE), [DeepFace](https://github.com/swghosh/DeepFace), [DeepID](https://github.com/Ruoyiran/DeepID/blob/master/LICENSE.md), [ArcFace](https://github.com/leondgarse/Keras_insightface/blob/master/LICENSE), [Dlib](https://github.com/davisking/dlib/blob/master/dlib/LICENSE.txt), [SFace](https://github.com/opencv/opencv_zoo/blob/master/models/face_recognition_sface/LICENSE) and [GhostFaceNet](https://github.com/HamadYA/GhostFaceNets/blob/main/LICENSE). Besides, age, gender and race / ethnicity models were trained on the backbone of VGG-Face with transfer learning. Similarly, DeepFace wraps many face detectors: [OpenCv](https://github.com/opencv/opencv/blob/4.x/LICENSE), [Ssd](https://github.com/opencv/opencv/blob/master/LICENSE), [Dlib](https://github.com/davisking/dlib/blob/master/LICENSE.txt), [MtCnn](https://github.com/ipazc/mtcnn/blob/master/LICENSE), [Fast MtCnn](https://github.com/timesler/facenet-pytorch/blob/master/LICENSE.md), [RetinaFace](https://github.com/serengil/retinaface/blob/master/LICENSE), [MediaPipe](https://github.com/google/mediapipe/blob/master/LICENSE), [YuNet](https://github.com/ShiqiYu/libfacedetection/blob/master/LICENSE), [Yolo](https://github.com/derronqi/yolov8-face/blob/main/LICENSE) and [CenterFace](https://github.com/Star-Clouds/CenterFace/blob/master/LICENSE). License types will be inherited when you intend to utilize those models. Please check the license types of those models for production purposes.
DeepFace [logo](https://thenounproject.com/term/face-recognition/2965879/) is created by [Adrien Coquet](https://thenounproject.com/coquet_adrien/) and it is licensed under [Creative Commons: By Attribution 3.0 License](https://creativecommons.org/licenses/by/3.0/).

1844
benchmarks/Evaluate-Results.ipynb vendored Normal file

File diff suppressed because one or more lines are too long

352
benchmarks/Perform-Experiments.ipynb vendored Normal file

@@ -0,0 +1,352 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "8133a99d",
"metadata": {},
"source": [
"# Perform Experiments with DeepFace on LFW dataset"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "5aab0cbe",
"metadata": {},
"outputs": [],
"source": [
"# built-in dependencies\n",
"import os\n",
"\n",
"# 3rd party dependencies\n",
"import numpy as np\n",
"import pandas as pd\n",
"from tqdm import tqdm\n",
"import matplotlib.pyplot as plt\n",
"from sklearn.metrics import accuracy_score\n",
"from sklearn.datasets import fetch_lfw_pairs\n",
"from deepface import DeepFace"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "64c9ed9a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"This experiment is done with pip package of deepface with 0.0.90 version\n"
]
}
],
"source": [
"print(f\"This experiment is done with pip package of deepface with {DeepFace.__version__} version\")"
]
},
{
"cell_type": "markdown",
"id": "feaec973",
"metadata": {},
"source": [
"### Configuration Sets"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "453104b4",
"metadata": {},
"outputs": [],
"source": [
"# all configuration alternatives for 4 dimensions of arguments\n",
"alignment = [True, False]\n",
"models = [\"Facenet512\", \"Facenet\", \"VGG-Face\", \"ArcFace\", \"Dlib\", \"GhostFaceNet\", \"SFace\", \"OpenFace\", \"DeepFace\", \"DeepID\"]\n",
"detectors = [\"retinaface\", \"mtcnn\", \"fastmtcnn\", \"dlib\", \"yolov8\", \"yunet\", \"centerface\", \"mediapipe\", \"ssd\", \"opencv\", \"skip\"]\n",
"metrics = [\"euclidean\", \"euclidean_l2\", \"cosine\"]\n",
"expand_percentage = 0"
]
},
{
"cell_type": "markdown",
"id": "c9aeb57a",
"metadata": {},
"source": [
"### Create Required Folders if necessary"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "671d8a00",
"metadata": {},
"outputs": [],
"source": [
"target_paths = [\"lfwe\", \"dataset\", \"outputs\", \"outputs/test\", \"results\"]\n",
"for target_path in target_paths:\n",
" if os.path.exists(target_path) != True:\n",
" os.mkdir(target_path)\n",
" print(f\"{target_path} is just created\")"
]
},
{
"cell_type": "markdown",
"id": "fc31f03a",
"metadata": {},
"source": [
"### Load LFW Dataset"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "721a7d70",
"metadata": {},
"outputs": [],
"source": [
"pairs_touch = \"outputs/test_lfwe.txt\"\n",
"instances = 1000 #pairs.shape[0]"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "010184d8",
"metadata": {},
"outputs": [],
"source": [
"target_path = \"dataset/test_lfw.npy\"\n",
"labels_path = \"dataset/test_labels.npy\"\n",
"\n",
"if os.path.exists(target_path) != True:\n",
" fetch_lfw_pairs = fetch_lfw_pairs(subset = 'test', color = True\n",
" , resize = 2\n",
" , funneled = False\n",
" , slice_=None\n",
" )\n",
" pairs = fetch_lfw_pairs.pairs\n",
" labels = fetch_lfw_pairs.target\n",
" target_names = fetch_lfw_pairs.target_names\n",
" np.save(target_path, pairs)\n",
" np.save(labels_path, labels)\n",
"else:\n",
" if os.path.exists(pairs_touch) != True:\n",
" # loading pairs takes some time. but if we extract these pairs as image, no need to load it anymore\n",
" pairs = np.load(target_path)\n",
" labels = np.load(labels_path) "
]
},
{
"cell_type": "markdown",
"id": "005f582e",
"metadata": {},
"source": [
"### Save LFW image pairs into file system"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "5bc23313",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"100%|██████████| 1000/1000 [00:00<00:00, 190546.25it/s]\n"
]
}
],
"source": [
"for i in tqdm(range(0, instances)):\n",
" img1_target = f\"lfwe/test/{i}_1.jpg\"\n",
" img2_target = f\"lfwe/test/{i}_2.jpg\"\n",
" \n",
" if os.path.exists(img1_target) != True:\n",
" img1 = pairs[i][0]\n",
" # plt.imsave(img1_target, img1/255) #works for my mac\n",
" plt.imsave(img1_target, img1) #works for my debian\n",
" \n",
" if os.path.exists(img2_target) != True:\n",
" img2 = pairs[i][1]\n",
" # plt.imsave(img2_target, img2/255) #works for my mac\n",
" plt.imsave(img2_target, img2) #works for my debian\n",
" \n",
"if os.path.exists(pairs_touch) != True:\n",
" open(pairs_touch,'a').close()"
]
},
{
"cell_type": "markdown",
"id": "6f8fa8fa",
"metadata": {},
"source": [
"### Perform Experiments\n",
"\n",
"This block will save the experiments results in outputs folder"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "e7fba936",
"metadata": {},
"outputs": [],
"source": [
"for model_name in models:\n",
" for detector_backend in detectors:\n",
" for distance_metric in metrics:\n",
" for align in alignment:\n",
" \n",
" if detector_backend == \"skip\" and align is True:\n",
" # Alignment is not possible for a skipped detector configuration\n",
" continue\n",
" \n",
" alignment_text = \"aligned\" if align is True else \"unaligned\"\n",
" task = f\"{model_name}_{detector_backend}_{distance_metric}_{alignment_text}\"\n",
" output_file = f\"outputs/test/{task}.csv\"\n",
" if os.path.exists(output_file) is True:\n",
" #print(f\"{output_file} is available already\")\n",
" continue\n",
" \n",
" distances = []\n",
" for i in tqdm(range(0, instances), desc = task):\n",
" img1_target = f\"lfwe/test/{i}_1.jpg\"\n",
" img2_target = f\"lfwe/test/{i}_2.jpg\"\n",
" result = DeepFace.verify(\n",
" img1_path=img1_target,\n",
" img2_path=img2_target,\n",
" model_name=model_name,\n",
" detector_backend=detector_backend,\n",
" distance_metric=distance_metric,\n",
" align=align,\n",
" enforce_detection=False,\n",
" expand_percentage=expand_percentage,\n",
" )\n",
" distance = result[\"distance\"]\n",
" distances.append(distance)\n",
" # -----------------------------------\n",
" df = pd.DataFrame(list(labels), columns = [\"actuals\"])\n",
" df[\"distances\"] = distances\n",
" df.to_csv(output_file, index=False)"
]
},
{
"cell_type": "markdown",
"id": "a0b8dafa",
"metadata": {},
"source": [
"### Calculate Results\n",
"\n",
"Experiments were responsible for calculating distances. We will calculate the best accuracy scores in this block."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "67376e76",
"metadata": {},
"outputs": [],
"source": [
"data = [[0 for _ in range(len(models))] for _ in range(len(detectors))]\n",
"base_df = pd.DataFrame(data, columns=models, index=detectors)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "f2cc536b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"results/pivot_euclidean_with_alignment_True.csv saved\n",
"results/pivot_euclidean_l2_with_alignment_True.csv saved\n",
"results/pivot_cosine_with_alignment_True.csv saved\n",
"results/pivot_euclidean_with_alignment_False.csv saved\n",
"results/pivot_euclidean_l2_with_alignment_False.csv saved\n",
"results/pivot_cosine_with_alignment_False.csv saved\n"
]
}
],
"source": [
"for is_aligned in alignment:\n",
" for distance_metric in metrics:\n",
"\n",
" current_df = base_df.copy()\n",
" \n",
" target_file = f\"results/pivot_{distance_metric}_with_alignment_{is_aligned}.csv\"\n",
" if os.path.exists(target_file):\n",
" continue\n",
" \n",
" for model_name in models:\n",
" for detector_backend in detectors:\n",
"\n",
" align = \"aligned\" if is_aligned is True else \"unaligned\"\n",
"\n",
" if detector_backend == \"skip\" and is_aligned is True:\n",
" # Alignment is not possible for a skipped detector configuration\n",
" align = \"unaligned\"\n",
"\n",
" source_file = f\"outputs/test/{model_name}_{detector_backend}_{distance_metric}_{align}.csv\"\n",
" df = pd.read_csv(source_file)\n",
" \n",
" positive_mean = df[(df[\"actuals\"] == True) | (df[\"actuals\"] == 1)][\"distances\"].mean()\n",
" negative_mean = df[(df[\"actuals\"] == False) | (df[\"actuals\"] == 0)][\"distances\"].mean()\n",
"\n",
" distances = sorted(df[\"distances\"].values.tolist())\n",
"\n",
" items = []\n",
" for i, distance in enumerate(distances):\n",
" if distance >= positive_mean and distance <= negative_mean:\n",
" sandbox_df = df.copy()\n",
" sandbox_df[\"predictions\"] = False\n",
" idx = sandbox_df[sandbox_df[\"distances\"] < distance].index\n",
" sandbox_df.loc[idx, \"predictions\"] = True\n",
"\n",
" actuals = sandbox_df.actuals.values.tolist()\n",
" predictions = sandbox_df.predictions.values.tolist()\n",
" accuracy = 100*accuracy_score(actuals, predictions)\n",
" items.append((distance, accuracy))\n",
"\n",
" pivot_df = pd.DataFrame(items, columns = [\"distance\", \"accuracy\"])\n",
" pivot_df = pivot_df.sort_values(by = [\"accuracy\"], ascending = False)\n",
" threshold = pivot_df.iloc[0][\"distance\"]\n",
" # print(f\"threshold for {model_name}/{detector_backend} is {threshold}\")\n",
" accuracy = pivot_df.iloc[0][\"accuracy\"]\n",
"\n",
" # print(source_file, round(accuracy, 1))\n",
" current_df.at[detector_backend, model_name] = round(accuracy, 1)\n",
" \n",
" current_df.to_csv(target_file)\n",
" print(f\"{target_file} saved\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

132
benchmarks/README.md Normal file

@@ -0,0 +1,132 @@
# Benchmarks
DeepFace offers various configurations that significantly impact accuracy, including the facial recognition model, face detector model, distance metric, and alignment mode. Our experiments conducted on the [LFW dataset](https://sefiks.com/2020/08/27/labeled-faces-in-the-wild-for-face-recognition/) using different combinations of these configurations yield the following results.
You can reproduce the results by running the `Perform-Experiments.ipynb` and `Evaluate-Results.ipynb` notebooks in that order.
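For orientation, the four configuration dimensions map directly onto `DeepFace.verify` arguments. The following condensed sketch mirrors the experiment grid of `Perform-Experiments.ipynb`, with a hypothetical image pair and a reduced model list for brevity.

```python
from deepface import DeepFace

models = ["Facenet512", "Facenet", "VGG-Face", "ArcFace"]  # reduced subset for brevity
detectors = ["retinaface", "mtcnn", "ssd", "opencv", "skip"]
metrics = ["euclidean", "euclidean_l2", "cosine"]
alignment = [True, False]

for model_name in models:
    for detector_backend in detectors:
        for distance_metric in metrics:
            for align in alignment:
                if detector_backend == "skip" and align:
                    continue  # alignment is not possible when detection is skipped
                result = DeepFace.verify(
                    img1_path = "img1.jpg",  # placeholder pair
                    img2_path = "img2.jpg",
                    model_name = model_name,
                    detector_backend = detector_backend,
                    distance_metric = distance_metric,
                    align = align,
                    enforce_detection = False,
                )
                print(f"{model_name}/{detector_backend}/{distance_metric}/align={align}: {result['distance']:.4f}")
```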
## ROC Curves
ROC curves provide a valuable means of evaluating the performance of different models on a broader scale. The following illustration shows the ROC curves of the facial recognition models, each under the optimal configuration yielding its highest accuracy score.
<p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/benchmarks.jpg" width="95%" height="95%"></p>
In summary, FaceNet-512d surpasses human-level accuracy, while FaceNet-128d reaches it. Dlib, VGG-Face and ArcFace trail closely behind; GhostFaceNet and SFace perform notably without leading; and OpenFace, DeepFace and DeepID exhibit lower performance.
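The per-pair distance files written by `Perform-Experiments.ipynb` are enough to reproduce such a curve. Below is a minimal sketch for one configuration; the CSV path is illustrative, and distances are negated so that larger scores mean more similar pairs, as `roc_curve` expects.

```python
import pandas as pd
from sklearn.metrics import roc_curve, auc

# one CSV per configuration, as written by Perform-Experiments.ipynb
df = pd.read_csv("outputs/test/Facenet512_retinaface_euclidean_l2_aligned.csv")

# smaller distances mean more similar pairs, so negate them to use as scores
fpr, tpr, _ = roc_curve(df["actuals"], -df["distances"])
print(f"AUC = {auc(fpr, tpr):.4f}")
```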
## Accuracy Scores
Please note that humans achieve a 97.5% accuracy score on the same dataset. Configurations that outperform this benchmark are highlighted in bold.
## Performance Matrix for euclidean while alignment is True
| | Facenet512 |Facenet |VGG-Face |ArcFace |Dlib |GhostFaceNet |SFace |OpenFace |DeepFace |DeepID |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| retinaface |95.9 |93.5 |95.8 |85.2 |88.9 |85.9 |80.2 |69.4 |67.0 |65.6 |
| mtcnn |95.2 |93.8 |95.9 |83.7 |89.4 |83.0 |77.4 |70.2 |66.5 |63.3 |
| fastmtcnn |96.0 |93.4 |95.8 |83.5 |91.1 |82.8 |77.7 |69.4 |66.7 |64.0 |
| dlib |96.0 |90.8 |94.5 |88.6 |96.8 |65.7 |66.3 |75.8 |63.4 |60.4 |
| yolov8 |94.4 |91.9 |95.0 |84.1 |89.2 |77.6 |73.4 |68.7 |69.0 |66.5 |
| yunet |97.3 |96.1 |96.0 |84.9 |92.2 |84.0 |79.4 |70.9 |65.8 |65.2 |
| centerface |**97.6** |95.8 |95.7 |83.6 |90.4 |82.8 |77.4 |68.9 |65.5 |62.8 |
| mediapipe |95.1 |88.6 |92.9 |73.2 |93.1 |63.2 |72.5 |78.7 |61.8 |62.2 |
| ssd |88.9 |85.6 |87.0 |75.8 |83.1 |79.1 |76.9 |66.8 |63.4 |62.5 |
| opencv |88.2 |84.2 |87.3 |73.0 |84.4 |83.8 |81.1 |66.4 |65.5 |59.6 |
| skip |92.0 |64.1 |90.6 |56.6 |69.0 |75.1 |81.4 |57.4 |60.8 |60.7 |
## Performance Matrix for euclidean while alignment is False
| | Facenet512 |Facenet |VGG-Face |ArcFace |Dlib |GhostFaceNet |SFace |OpenFace |DeepFace |DeepID |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| retinaface |96.1 |92.8 |95.7 |84.1 |88.3 |83.2 |78.6 |70.8 |67.4 |64.3 |
| mtcnn |95.9 |92.5 |95.5 |81.8 |89.3 |83.2 |76.3 |70.9 |65.9 |63.2 |
| fastmtcnn |96.3 |93.0 |96.0 |82.2 |90.0 |82.7 |76.8 |71.2 |66.5 |64.3 |
| dlib |96.0 |89.0 |94.1 |82.6 |96.3 |65.6 |73.1 |75.9 |61.8 |61.9 |
| yolov8 |94.8 |90.8 |95.2 |83.2 |88.4 |77.6 |71.6 |68.9 |68.2 |66.3 |
| yunet |**97.9** |96.5 |96.3 |84.1 |91.4 |82.7 |78.2 |71.7 |65.5 |65.2 |
| centerface |97.4 |95.4 |95.8 |83.2 |90.3 |82.0 |76.5 |69.9 |65.7 |62.9 |
| mediapipe |94.9 |87.1 |93.1 |71.1 |91.9 |61.9 |73.2 |77.6 |61.7 |62.4 |
| ssd |97.2 |94.9 |96.7 |83.9 |88.6 |84.9 |82.0 |69.9 |66.7 |64.0 |
| opencv |94.1 |90.2 |95.8 |89.8 |91.2 |91.0 |86.9 |71.1 |68.4 |61.1 |
| skip |92.0 |64.1 |90.6 |56.6 |69.0 |75.1 |81.4 |57.4 |60.8 |60.7 |
## Performance Matrix for euclidean_l2 while alignment is True
| | Facenet512 |Facenet |VGG-Face |ArcFace |Dlib |GhostFaceNet |SFace |OpenFace |DeepFace |DeepID |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| retinaface |**98.4** |96.4 |95.8 |96.6 |89.1 |90.5 |92.4 |69.4 |67.7 |64.4 |
| mtcnn |**97.6** |96.8 |95.9 |96.0 |90.0 |89.8 |90.5 |70.2 |66.4 |64.0 |
| fastmtcnn |**98.1** |97.2 |95.8 |96.4 |91.0 |89.5 |90.0 |69.4 |67.4 |64.1 |
| dlib |97.0 |92.6 |94.5 |95.1 |96.4 |63.3 |69.8 |75.8 |66.5 |59.5 |
| yolov8 |97.3 |95.7 |95.0 |95.5 |88.8 |88.9 |91.9 |68.7 |67.5 |66.0 |
| yunet |**97.9** |97.4 |96.0 |96.7 |91.6 |89.1 |91.0 |70.9 |66.5 |63.6 |
| centerface |**97.7** |96.8 |95.7 |96.5 |90.9 |87.5 |89.3 |68.9 |67.8 |64.0 |
| mediapipe |96.1 |90.6 |92.9 |90.3 |92.6 |64.4 |75.4 |78.7 |64.7 |63.0 |
| ssd |88.7 |87.5 |87.0 |86.2 |83.3 |82.2 |84.6 |66.8 |64.1 |62.6 |
| opencv |87.6 |84.8 |87.3 |84.6 |84.0 |85.0 |83.6 |66.4 |63.8 |60.9 |
| skip |91.4 |67.6 |90.6 |57.2 |69.3 |78.4 |83.4 |57.4 |62.6 |61.6 |
## Performance Matrix for euclidean_l2 while alignment is False
| | Facenet512 |Facenet |VGG-Face |ArcFace |Dlib |GhostFaceNet |SFace |OpenFace |DeepFace |DeepID |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| retinaface |**98.0** |95.9 |95.7 |95.7 |88.4 |89.5 |90.6 |70.8 |67.7 |64.6 |
| mtcnn |**97.8** |96.2 |95.5 |95.9 |89.2 |88.0 |91.1 |70.9 |67.0 |64.0 |
| fastmtcnn |**97.7** |96.6 |96.0 |95.9 |89.6 |87.8 |89.7 |71.2 |67.8 |64.2 |
| dlib |96.5 |89.9 |94.1 |93.8 |95.6 |63.0 |75.0 |75.9 |62.6 |61.8 |
| yolov8 |**97.7** |95.8 |95.2 |95.0 |88.1 |88.7 |89.8 |68.9 |68.9 |65.3 |
| yunet |**98.3** |96.8 |96.3 |96.1 |91.7 |88.0 |90.5 |71.7 |67.6 |63.2 |
| centerface |97.4 |96.3 |95.8 |95.8 |90.2 |86.8 |89.3 |69.9 |68.4 |63.1 |
| mediapipe |96.3 |90.0 |93.1 |89.3 |91.8 |65.6 |74.6 |77.6 |64.9 |61.6 |
| ssd |**97.9** |97.0 |96.7 |96.6 |89.4 |91.5 |93.0 |69.9 |68.7 |64.9 |
| opencv |96.2 |92.9 |95.8 |93.2 |91.5 |93.3 |91.7 |71.1 |68.3 |61.6 |
| skip |91.4 |67.6 |90.6 |57.2 |69.3 |78.4 |83.4 |57.4 |62.6 |61.6 |
## Performance Matrix for cosine while alignment is True
| | Facenet512 |Facenet |VGG-Face |ArcFace |Dlib |GhostFaceNet |SFace |OpenFace |DeepFace |DeepID |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| retinaface |**98.4** |96.4 |95.8 |96.6 |89.1 |90.5 |92.4 |69.4 |67.7 |64.4 |
| mtcnn |**97.6** |96.8 |95.9 |96.0 |90.0 |89.8 |90.5 |70.2 |66.3 |63.0 |
| fastmtcnn |**98.1** |97.2 |95.8 |96.4 |91.0 |89.5 |90.0 |69.4 |67.4 |63.6 |
| dlib |97.0 |92.6 |94.5 |95.1 |96.4 |63.3 |69.8 |75.8 |66.5 |58.7 |
| yolov8 |97.3 |95.7 |95.0 |95.5 |88.8 |88.9 |91.9 |68.7 |67.5 |65.9 |
| yunet |**97.9** |97.4 |96.0 |96.7 |91.6 |89.1 |91.0 |70.9 |66.5 |63.5 |
| centerface |**97.7** |96.8 |95.7 |96.5 |90.9 |87.5 |89.3 |68.9 |67.8 |63.6 |
| mediapipe |96.1 |90.6 |92.9 |90.3 |92.6 |64.3 |75.4 |78.7 |64.8 |63.0 |
| ssd |88.7 |87.5 |87.0 |86.2 |83.3 |82.2 |84.5 |66.8 |63.8 |62.6 |
| opencv |87.6 |84.9 |87.2 |84.6 |84.0 |85.0 |83.6 |66.2 |63.7 |60.1 |
| skip |91.4 |67.6 |90.6 |54.8 |69.3 |78.4 |83.4 |57.4 |62.6 |61.1 |
## Performance Matrix for cosine while alignment is False
| | Facenet512 |Facenet |VGG-Face |ArcFace |Dlib |GhostFaceNet |SFace |OpenFace |DeepFace |DeepID |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| retinaface |**98.0** |95.9 |95.7 |95.7 |88.4 |89.5 |90.6 |70.8 |67.7 |63.7 |
| mtcnn |**97.8** |96.2 |95.5 |95.9 |89.2 |88.0 |91.1 |70.9 |67.0 |64.0 |
| fastmtcnn |**97.7** |96.6 |96.0 |95.9 |89.6 |87.8 |89.7 |71.2 |67.8 |62.7 |
| dlib |96.5 |89.9 |94.1 |93.8 |95.6 |63.0 |75.0 |75.9 |62.6 |61.7 |
| yolov8 |**97.7** |95.8 |95.2 |95.0 |88.1 |88.7 |89.8 |68.9 |68.9 |65.3 |
| yunet |**98.3** |96.8 |96.3 |96.1 |91.7 |88.0 |90.5 |71.7 |67.6 |63.2 |
| centerface |97.4 |96.3 |95.8 |95.8 |90.2 |86.8 |89.3 |69.9 |68.4 |62.6 |
| mediapipe |96.3 |90.0 |93.1 |89.3 |91.8 |64.8 |74.6 |77.6 |64.9 |61.6 |
| ssd |**97.9** |97.0 |96.7 |96.6 |89.4 |91.5 |93.0 |69.9 |68.7 |63.8 |
| opencv |96.2 |92.9 |95.8 |93.2 |91.5 |93.3 |91.7 |71.1 |68.1 |61.1 |
| skip |91.4 |67.6 |90.6 |54.8 |69.3 |78.4 |83.4 |57.4 |62.6 |61.1 |
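The six matrices above are also written to disk as pivot CSV files by the Calculate Results step of `Perform-Experiments.ipynb`. Assuming those files are present under `results/`, the following sketch scans them for the single best configuration.

```python
import pandas as pd

best_score, best_config = 0.0, None
for aligned in [True, False]:
    for metric in ["euclidean", "euclidean_l2", "cosine"]:
        # rows are detectors, columns are facial recognition models
        df = pd.read_csv(f"results/pivot_{metric}_with_alignment_{aligned}.csv", index_col = 0)
        detector, model = df.stack().idxmax()
        score = df.stack().max()
        if score > best_score:
            best_score, best_config = score, (model, detector, metric, aligned)

print(best_config, best_score)
```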
# Citation
Please cite deepface in your publications if it helps your research - see [`CITATIONS`](https://github.com/serengil/deepface/blob/master/CITATION.md) for more details. Here is its BibTeX entry:
```BibTeX
@article{serengil2024lightface,
title = {A Benchmark of Facial Recognition Pipelines and Co-Usability Performances of Modules},
author = {Serengil, Sefik Ilkin and Ozpinar, Alper},
journal = {Bilisim Teknolojileri Dergisi},
volume = {17},
number = {2},
pages = {95-107},
year = {2024},
doi = {10.17671/gazibtd.1399077},
url = {https://dergipark.org.tr/en/pub/gazibtd/issue/84331/1399077},
publisher = {Gazi University}
}
```

BIN
icon/benchmarks.jpg Normal file

Binary file not shown.
