diff --git a/README.md b/README.md
index 4211239..be961b9 100644
--- a/README.md
+++ b/README.md
@@ -67,14 +67,14 @@ df = DeepFace.find(img_path = "img1.jpg", db_path = "C:/workspace/my_db", model_
 FaceNet, VGG-Face, ArcFace and Dlib are [overperforming](https://youtu.be/i_MOwvhbLdI) ones based on experiments. You can find the scores of those models on both [Labeled Faces in the Wild](https://sefiks.com/2020/08/27/labeled-faces-in-the-wild-for-face-recognition/) and YouTube Faces in the Wild data sets declared by its creators.
 
-| Model | LFW Score | YFW Score |
+| Model | LFW Score | YTF Score |
 | --- | --- | --- |
 | Facenet512 | 99.65% | - |
 | ArcFace | 99.41% | - |
 | Dlib | 99.38 % | - |
 | Facenet | 99.20% | - |
 | VGG-Face | 98.78% | 97.40% |
-| Human-beings | 97.53% | - |
+| *Human-beings* | 97.53% | - |
 | OpenFace | 93.80% | - |
 | DeepID | - | 97.05% |