Mirror of https://github.com/serengil/deepface.git
fb deepface model added
This commit is contained in: parent 1d7de9383c, commit c894020298

README.md (36 changed lines)
@@ -4,7 +4,7 @@

<p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/deepface-icon.png" width="20%" height="20%"></p>

-**deepface** is a lightweight python based facial analysis framework including face recognition and demography (age, gender, emotion and race). You can use the framework with a just few lines of codes.
+**deepface** is a lightweight facial analysis framework for Python, including face recognition and demography (age, gender, emotion and race). You can use the framework with just a few lines of code.

# Face Recognition
@@ -17,23 +17,26 @@ result = DeepFace.verify("img1.jpg", "img2.jpg")

<p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/tests/dataset/test-case-1.jpg" width="50%" height="50%"></p>

-```
-Model: VGG-Face
-Similarity metric: Cosine
-Max Threshold to Verify: 0.40
-Found Distance: 0.25638097524642944
-Result: They are same
+```json
+{
+   "verified": true,
+   "distance": 0.25638097524642944,
+   "max_threshold_to_verify": 0.40,
+   "model": "VGG-Face",
+   "similarity_metric": "cosine"
+}
```

## Face recognition models

-Face recognition can be handled by different models. Currently, [`VGG-Face`](https://sefiks.com/2018/08/06/deep-face-recognition-with-keras/) , [`Facenet`](https://sefiks.com/2018/09/03/face-recognition-with-facenet-in-keras/) and [`OpenFace`](https://sefiks.com/2019/07/21/face-recognition-with-openface-in-keras/) models are supported in deepface. The default configuration verifies faces with **VGG-Face** model. You can set the base model while verification as illustared below. Accuracy and speed show difference based on the performing model.
+Face recognition can be handled by different models. Currently, [`VGG-Face`](https://sefiks.com/2018/08/06/deep-face-recognition-with-keras/), [`Google Facenet`](https://sefiks.com/2018/09/03/face-recognition-with-facenet-in-keras/), [`OpenFace`](https://sefiks.com/2019/07/21/face-recognition-with-openface-in-keras/) and `Facebook DeepFace` models are supported in deepface. The default configuration verifies faces with the **VGG-Face** model. You can set the base model during verification as illustrated below. Accuracy and speed differ based on the model used.

```python
vggface_result = DeepFace.verify("img1.jpg", "img2.jpg") #default is VGG-Face
+#vggface_result = DeepFace.verify("img1.jpg", "img2.jpg", model_name = "VGG-Face") #identical to the line above
facenet_result = DeepFace.verify("img1.jpg", "img2.jpg", model_name = "Facenet")
openface_result = DeepFace.verify("img1.jpg", "img2.jpg", model_name = "OpenFace")
+deepface_result = DeepFace.verify("img1.jpg", "img2.jpg", model_name = "DeepFace")
```

VGG-Face has the highest accuracy score but it is not convenient for real-time studies because of its complex structure. Facenet is a complex model as well. On the other hand, OpenFace has a close accuracy score and performs the fastest, which makes it much more convenient for real-time studies.
@@ -50,18 +53,11 @@ result = DeepFace.verify("img1.jpg", "img2.jpg", model_name = "VGG-Face", distan

## Verification

-Verification function returns a tuple including boolean verification result, distance between two faces and max threshold to identify (this shows difference based on face recognition model and similarity metric).
-
-```
-(True, 0.281734, 0.30)
-```
-
-You can just check the verification result to decide that two images are same person or not. Thresholds for distance metrics are already tuned in the framework for face recognition models and distance metrics.
+The verification function returns a JSON object whose verification result is based on the found distance and the tuned threshold. You can check the result by accessing the attributes of the JSON object.

```python
-verified = result[0] #returns True if images are same person's face
-found_distance = result[1] #distance of two face vectors
-max_threshold_to_verify = result[2] #faces have a distance less than this value will be verified
+result = DeepFace.verify("img1.jpg", "img2.jpg")
+is_verified = result["verified"]
```
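
The remaining attributes of the response object can be read the same way. The following lines are illustrative, using the keys from the JSON example above:

```python
result = DeepFace.verify("img1.jpg", "img2.jpg")
is_verified = result["verified"]              #True if both images are the same person's face
distance = result["distance"]                 #distance between the two face representations
threshold = result["max_threshold_to_verify"] #distances below this value are verified
```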
# Facial Attribute Analysis

@@ -149,7 +145,9 @@ Deepface is mentioned in this [youtube playlist](https://www.youtube.com/watch?v

Reference face recognition models have different types of licenses. This framework is just a wrapper for those models, so their license types are inherited as well. You should check the licenses of the face recognition models before use.

-Herein, [OpenFace](https://github.com/cmusatyalab/openface/blob/master/LICENSE) is licensed under Apache License 2.0, and [Facenet](https://github.com/davidsandberg/facenet/blob/master/LICENSE.md) is licensed under MIT License. They both allow you to use commercial use. On the other hand, [VGG-Face](http://www.robots.ox.ac.uk/~vgg/software/vgg_face/) is licensed under Creative Commons Attribution License. That's why, it is restricted to adopt VGG-Face for commercial use.
+Herein, [OpenFace](https://github.com/cmusatyalab/openface/blob/master/LICENSE) is licensed under the Apache License 2.0, while [FB DeepFace](https://github.com/swghosh/DeepFace) and [Facenet](https://github.com/davidsandberg/facenet/blob/master/LICENSE.md) are licensed under the MIT License. Both the Apache License 2.0 and the MIT License allow commercial use.
+
+On the other hand, [VGG-Face](http://www.robots.ox.ac.uk/~vgg/software/vgg_face/) is licensed under the Creative Commons Attribution License, which restricts adopting VGG-Face for commercial use.

# Support

deepface/DeepFace.py (file name inferred from the imports and functions below)

@@ -9,11 +9,11 @@ import pandas as pd
from tqdm import tqdm
import json

-#from basemodels import VGGFace, OpenFace, Facenet, Age, Gender, Race, Emotion
+#from basemodels import VGGFace, OpenFace, Facenet, FbDeepFace
+#from extendedmodels import Age, Gender, Race, Emotion
#from commons import functions, distance as dst

-from deepface.basemodels import VGGFace, OpenFace, Facenet
+from deepface.basemodels import VGGFace, OpenFace, Facenet, FbDeepFace
from deepface.extendedmodels import Age, Gender, Race, Emotion
from deepface.commons import functions, distance as dst
@@ -36,20 +36,25 @@ def verify(img1_path, img2_path

    #-------------------------

    if model_name == 'VGG-Face':
-        print("Using VGG-Face backend ", end='')
+        print("Using VGG-Face model backend and", distance_metric, "distance.")
        model = VGGFace.loadModel()
        input_shape = (224, 224)

    elif model_name == 'OpenFace':
-        print("Using OpenFace backend ", end='')
+        print("Using OpenFace model backend and", distance_metric, "distance.")
        model = OpenFace.loadModel()
        input_shape = (96, 96)

    elif model_name == 'Facenet':
-        print("Using Facenet backend ", end='')
+        print("Using Facenet model backend and", distance_metric, "distance.")
        model = Facenet.loadModel()
        input_shape = (160, 160)

+    elif model_name == 'DeepFace':
+        print("Using FB DeepFace model backend and", distance_metric, "distance.")
+        model = FbDeepFace.loadModel()
+        input_shape = (152, 152)

    else:
        raise ValueError("Invalid model_name passed - ", model_name)
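
The chain above pairs each model with its loader and its expected input resolution. A table-driven equivalent, as a hypothetical refactor sketch (not part of this commit):

```python
#hypothetical: the same selection expressed as a lookup table
MODEL_REGISTRY = {
    'VGG-Face': (VGGFace.loadModel, (224, 224)),
    'OpenFace': (OpenFace.loadModel, (96, 96)),
    'Facenet':  (Facenet.loadModel, (160, 160)),
    'DeepFace': (FbDeepFace.loadModel, (152, 152)),
}

if model_name not in MODEL_REGISTRY:
    raise ValueError("Invalid model_name passed - ", model_name)

load_fn, input_shape = MODEL_REGISTRY[model_name]
print("Using", model_name, "model backend and", distance_metric, "distance.")
model = load_fn()
```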
@@ -72,13 +77,10 @@ def verify(img1_path, img2_path

    #find distances between embeddings

    if distance_metric == 'cosine':
-        print("and cosine similarity.")
        distance = dst.findCosineDistance(img1_representation, img2_representation)
    elif distance_metric == 'euclidean':
-        print("and euclidean distance.")
        distance = dst.findEuclideanDistance(img1_representation, img2_representation)
    elif distance_metric == 'euclidean_l2':
-        print("and euclidean distance l2 form.")
        distance = dst.findEuclideanDistance(dst.l2_normalize(img1_representation), dst.l2_normalize(img2_representation))
    else:
        raise ValueError("Invalid distance_metric passed - ", distance_metric)
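
For reference, a minimal sketch of what the helpers in deepface.commons.distance conventionally compute, assuming the standard definitions (not copied from the module itself):

```python
import numpy as np

def findCosineDistance(a, b):
    #cosine distance = 1 - cosine similarity of the two embedding vectors
    a, b = np.asarray(a), np.asarray(b)
    return 1 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def findEuclideanDistance(a, b):
    #straight-line distance between the two embedding vectors
    return np.linalg.norm(np.asarray(a) - np.asarray(b))

def l2_normalize(x):
    #scale a vector to unit length before measuring euclidean distance
    x = np.asarray(x)
    return x / np.linalg.norm(x)
```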
@@ -87,11 +89,9 @@ def verify(img1_path, img2_path

    #decision

    if distance <= threshold:
-        identified = True
-        message = "The both face photos are same person."
+        identified = "true"
    else:
-        identified = False
-        message = "The both face photos are not same person!"
+        identified = "false"

    #-------------------------
@@ -114,11 +114,19 @@ def verify(img1_path, img2_path

    toc = time.time()

+    resp_obj = "{"
+    resp_obj += "\"verified\": "+identified
+    resp_obj += ", \"distance\": "+str(distance)
+    resp_obj += ", \"max_threshold_to_verify\": "+str(threshold)
+    resp_obj += ", \"model\": \""+model_name+"\""
+    resp_obj += ", \"similarity_metric\": \""+distance_metric+"\""
+    resp_obj += "}"
+
+    resp_obj = json.loads(resp_obj) #string to json
+
    #print("identification lasts ",toc-tic," seconds")

-    #Return a tuple. First item is the identification result based on tuned threshold.
-    #Second item is the threshold. You might want to customize this threshold to identify faces.
-    return (identified, distance, threshold)
+    return resp_obj
def analyze(img_path, actions= []):
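
Building the response as a Python dict and letting the json module handle serialization would avoid the manual quoting and the json.loads round-trip in the hunk above. A hypothetical alternative sketch, not what this commit does:

```python
#hypothetical refactor: build the response object directly as a dict
resp_obj = {
    "verified": distance <= threshold,  #a real boolean instead of "true"/"false" strings
    "distance": distance,
    "max_threshold_to_verify": threshold,
    "model": model_name,
    "similarity_metric": distance_metric,
}
return resp_obj
```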
46 deepface/basemodels/FbDeepFace.py (new file)

@@ -0,0 +1,46 @@
import os
from pathlib import Path
import gdown
import keras
from keras.models import Model, Sequential
from keras.layers import Convolution2D, LocallyConnected2D, MaxPooling2D, Flatten, Dense, Dropout
import zipfile

#-------------------------------------

def loadModel():
    base_model = Sequential()
    base_model.add(Convolution2D(32, (11, 11), activation='relu', name='C1', input_shape=(152, 152, 3)))
    base_model.add(MaxPooling2D(pool_size=3, strides=2, padding='same', name='M2'))
    base_model.add(Convolution2D(16, (9, 9), activation='relu', name='C3'))
    base_model.add(LocallyConnected2D(16, (9, 9), activation='relu', name='L4'))
    base_model.add(LocallyConnected2D(16, (7, 7), strides=2, activation='relu', name='L5'))
    base_model.add(LocallyConnected2D(16, (5, 5), activation='relu', name='L6'))
    base_model.add(Flatten(name='F0'))
    base_model.add(Dense(4096, activation='relu', name='F7'))
    base_model.add(Dropout(rate=0.5, name='D0'))
    base_model.add(Dense(8631, activation='softmax', name='F8'))

    #---------------------------------
    #download pre-trained weights if they are not cached locally

    home = str(Path.home())

    if os.path.isfile(home+'/.deepface/weights/VGGFace2_DeepFace_weights_val-0.9034.h5') != True:
        print("VGGFace2_DeepFace_weights_val-0.9034.h5 will be downloaded...")

        url = 'https://github.com/swghosh/DeepFace/releases/download/weights-vggface2-2d-aligned/VGGFace2_DeepFace_weights_val-0.9034.h5.zip'
        output = home+'/.deepface/weights/VGGFace2_DeepFace_weights_val-0.9034.h5.zip'
        gdown.download(url, output, quiet=False)

        #unzip VGGFace2_DeepFace_weights_val-0.9034.h5.zip
        with zipfile.ZipFile(output, 'r') as zip_ref:
            zip_ref.extractall(home+'/.deepface/weights/')

    base_model.load_weights(home+'/.deepface/weights/VGGFace2_DeepFace_weights_val-0.9034.h5')

    #drop F8 and D0. F7 is the representation layer.
    deepface_model = Model(inputs=base_model.layers[0].input, outputs=base_model.layers[-3].output)

    return deepface_model
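
A quick sanity check of the returned model (illustrative sketch only; random noise stands in for a detected, 152x152-aligned face crop):

```python
import numpy as np
from deepface.basemodels import FbDeepFace

model = FbDeepFace.loadModel()          #downloads the weights on first use
face = np.random.rand(1, 152, 152, 3)   #stand-in for a preprocessed face image
embedding = model.predict(face)[0]      #output of the F7 representation layer
print(embedding.shape)                  #(4096,)
```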
deepface/commons/functions.py (file name inferred from the functions below)

@@ -89,6 +89,14 @@ def findThreshold(model_name, distance_metric):
        elif distance_metric == 'euclidean_l2':
            threshold = 0.80

+    elif model_name == 'DeepFace':
+        if distance_metric == 'cosine':
+            threshold = 0.23
+        elif distance_metric == 'euclidean':
+            threshold = 64
+        elif distance_metric == 'euclidean_l2':
+            threshold = 0.69

    return threshold
def detectFace(image_path, target_size=(224, 224), grayscale = False):
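
Illustrative use of the tuned threshold inside verify(), with the DeepFace values added above (the call shape follows the signature in the hunk header):

```python
threshold = findThreshold('DeepFace', 'cosine')   #returns 0.23
verified = distance <= threshold                  #distance as computed in verify()
```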
@@ -1,3 +0,0 @@
-Icon made by Pixel perfect from www.flaticon.com
-
-Pixel perfect: https://www.flaticon.com/authors/pixel-perfect
BIN tests/dataset/img5.jpg (new file, 34 KiB, binary not shown)
BIN tests/dataset/img6.jpg (new file, 77 KiB, binary not shown)
BIN tests/dataset/img7.jpg (new file, 516 KiB, binary not shown)
BIN tests/dataset/img8.jpg (new file, 154 KiB, binary not shown)
BIN tests/dataset/img9.jpg (new file, 265 KiB, binary not shown)
tests/unit_tests.py (file name inferred from the test code below)

@@ -1,16 +1,19 @@
from deepface import DeepFace
+import json

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

#-----------------------------------------

-print("Facial analysis tests")
+print("Facial analysis test. Passing nothing as an action")

img = "dataset/img4.jpg"
demography = DeepFace.analyze(img)
print(demography)

-#-----------------------------------------
+print("-----------------------------------------")

+print("Facial analysis test. Passing all to the action")
+demography = DeepFace.analyze(img, ['age', 'gender', 'race', 'emotion'])

print("Demography:")
@@ -28,11 +31,17 @@ print("Face recognition tests")

dataset = [
    ['dataset/img1.jpg', 'dataset/img2.jpg', True],
+   ['dataset/img5.jpg', 'dataset/img6.jpg', True],
+   ['dataset/img6.jpg', 'dataset/img7.jpg', True],
+   ['dataset/img8.jpg', 'dataset/img9.jpg', True],

    ['dataset/img1.jpg', 'dataset/img3.jpg', False],
    ['dataset/img2.jpg', 'dataset/img3.jpg', False],
+   ['dataset/img6.jpg', 'dataset/img8.jpg', False],
+   ['dataset/img6.jpg', 'dataset/img9.jpg', False],
]

-models = ['VGG-Face', 'Facenet', 'OpenFace']
+models = ['VGG-Face', 'Facenet', 'OpenFace', 'DeepFace']
metrics = ['cosine', 'euclidean', 'euclidean_l2']

passed_tests = 0; test_cases = 0
@@ -44,21 +53,24 @@ for model in models:
            img2 = instance[1]
            result = instance[2]

-           idx = DeepFace.verify(img1, img2, model_name = model, distance_metric = metric)
+           resp_obj = DeepFace.verify(img1, img2, model_name = model, distance_metric = metric)
+           prediction = resp_obj["verified"]
+           distance = round(resp_obj["distance"], 2)
+           required_threshold = resp_obj["max_threshold_to_verify"]

            test_result_label = "failed"
-           if idx[0] == result:
+           if prediction == result:
                passed_tests = passed_tests + 1
                test_result_label = "passed"

-           if idx[0] == True:
+           if prediction == True:
                classified_label = "verified"
            else:
                classified_label = "unverified"

            test_cases = test_cases + 1

-           print(img1, " and ", img2," are ", classified_label, " as same person based on ", model," model and ",metric," distance metric. Distance: ",round(idx[1], 2),", Required Threshold: ", idx[2]," (",test_result_label,")")
+           print(img1, " and ", img2," are ", classified_label, " as same person based on ", model," model and ",metric," distance metric. Distance: ",distance,", Required Threshold: ", required_threshold," (",test_result_label,")")

            print("--------------------------")
@@ -72,4 +84,4 @@ accuracy = round(accuracy, 2)

if accuracy > 80:
    print("Unit tests are completed successfully. Score: ",accuracy,"%")
else:
-   raise ValueError("Unit test score does not satisfy the minimum required accuracy. Minimum expected score is 80% but this got ",accuracy,"%")
+   raise ValueError("Unit test score does not satisfy the minimum required accuracy. Minimum expected score is 80% but this got ",accuracy,"%")