mirror of https://github.com/serengil/deepface.git (synced 2025-07-22 01:40:01 +00:00)
score table

commit 97faf237ae (parent 96b8f9878f)
README.md (13 changed lines)
@@ -65,7 +65,18 @@ df = DeepFace.find(img_path = "img1.jpg", db_path = "C:/workspace/my_db", model_
<p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/deepface-wrapped-models.png" width="95%" height="95%"></p>
FaceNet, VGG-Face, ArcFace and Dlib [outperform](https://youtu.be/i_MOwvhbLdI) OpenFace, DeepFace and DeepID in experiments. In particular, FaceNet with 512d got 99.65%; FaceNet with 128d got 99.2%; ArcFace got 99.41%; Dlib got 99.38%; VGG-Face got 98.78%; DeepID got 97.05%; OpenFace got 93.80% accuracy on the [LFW data set](https://sefiks.com/2020/08/27/labeled-faces-in-the-wild-for-face-recognition/), whereas human beings score just 97.53%.
FaceNet, VGG-Face, ArcFace and Dlib are the [top-performing](https://youtu.be/i_MOwvhbLdI) models in experiments. Their scores on the [Labeled Faces in the Wild](https://sefiks.com/2020/08/27/labeled-faces-in-the-wild-for-face-recognition/) and YouTube Faces in the Wild data sets, as declared by their creators, are shown below.
| Model | LFW Score | YFW Score |
| --- | --- | --- |
| Facenet512 | 99.65% | - |
| ArcFace | 99.41% | - |
| Dlib | 99.38% | - |
| Facenet | 99.20% | - |
| VGG-Face | 98.78% | 97.40% |
| Human-beings | 97.53% | - |
| OpenFace | 93.80% | - |
| DeepID | - | 97.05% |
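
Any of these models can be plugged into the wrapper functions through their model name. Below is a minimal sketch, assuming the `DeepFace.verify` / `DeepFace.find` interfaces shown in the diff context above; the image and database paths are placeholders.

```python
from deepface import DeepFace

# face verification with the highest-scoring model from the table
result = DeepFace.verify(img1_path = "img1.jpg", img2_path = "img2.jpg", model_name = "Facenet512")
print(result["verified"])  # True if the two faces are judged to belong to the same person

# face recognition against a local database, switching the backbone to ArcFace
df = DeepFace.find(img_path = "img1.jpg", db_path = "C:/workspace/my_db", model_name = "ArcFace")
```

The `model_name` strings are expected to match the model labels in the first column of the table (e.g. "VGG-Face", "Facenet", "Facenet512", "OpenFace", "DeepID", "ArcFace", "Dlib").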
**Similarity**