Mirror of https://github.com/serengil/deepface.git, synced 2025-07-21 09:20:02 +00:00
Merge remote-tracking branch 'origin/master' into patch/adjustment-0103-1
This commit is contained in:
commit a442f7a382

127 .github/CODE_OF_CONDUCT.md (vendored, new file)
@@ -0,0 +1,127 @@
# Contributor Covenant Code of Conduct

## Our Pledge

We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.

## Our Standards

Examples of behavior that contributes to a positive environment for our
community include:

* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
  and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
  overall community

Examples of unacceptable behavior include:

* The use of sexualized language or imagery, and sexual attention or
  advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
  address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
  professional setting

## Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.

Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.

## Scope

This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at serengil@gmail.com.
All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the
reporter of any incident.

## Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:

### 1. Correction

**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.

**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.

### 2. Warning

**Community Impact**: A violation through a single incident or series
of actions.

**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.

### 3. Temporary Ban

**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.

**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.

### 4. Permanent Ban

**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.

**Consequence**: A permanent ban from any sort of public interaction within
the community.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.

Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).

[homepage]: https://www.contributor-covenant.org

For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.
@@ -423,7 +423,7 @@ Additionally, you can help us reach a wider audience by upvoting our posts on Ha

Please cite deepface in your publications if it helps your research - see [`CITATIONS`](https://github.com/serengil/deepface/blob/master/CITATION.md) for more details. Here are its BibTex entries:

-If you use deepface in your research for facial recogntion or face detection purposes, please cite these publications:
+If you use deepface in your research for facial recognition or face detection purposes, please cite these publications:

```BibTeX
@article{serengil2024lightface,
@@ -2,7 +2,7 @@
 import os
 import warnings
 import logging
-from typing import Any, Dict, List, Union, Optional
+from typing import Any, Dict, IO, List, Union, Optional

 # this has to be set before importing tensorflow
 os.environ["TF_USE_LEGACY_KERAS"] = "1"

@@ -68,8 +68,8 @@ def build_model(model_name: str, task: str = "facial_recognition") -> Any:


 def verify(
-    img1_path: Union[str, np.ndarray, List[float]],
-    img2_path: Union[str, np.ndarray, List[float]],
+    img1_path: Union[str, np.ndarray, IO[bytes], List[float]],
+    img2_path: Union[str, np.ndarray, IO[bytes], List[float]],
     model_name: str = "VGG-Face",
     detector_backend: str = "opencv",
     distance_metric: str = "cosine",

@@ -84,12 +84,14 @@ def verify(
     """
     Verify if an image pair represents the same person or different persons.
     Args:
-        img1_path (str or np.ndarray or List[float]): Path to the first image.
-            Accepts exact image path as a string, numpy array (BGR), base64 encoded images
+        img1_path (str or np.ndarray or IO[bytes] or List[float]): Path to the first image.
+            Accepts exact image path as a string, numpy array (BGR), a file object that supports
+            at least `.read` and is opened in binary mode, base64 encoded images
             or pre-calculated embeddings.

-        img2_path (str or np.ndarray or List[float]): Path to the second image.
-            Accepts exact image path as a string, numpy array (BGR), base64 encoded images
+        img2_path (str or np.ndarray or IO[bytes] or List[float]): Path to the second image.
+            Accepts exact image path as a string, numpy array (BGR), a file object that supports
+            at least `.read` and is opened in binary mode, base64 encoded images
             or pre-calculated embeddings.

         model_name (str): Model for face recognition. Options: VGG-Face, Facenet, Facenet512,

@@ -164,7 +166,7 @@ def verify(


 def analyze(
-    img_path: Union[str, np.ndarray],
+    img_path: Union[str, np.ndarray, IO[bytes]],
     actions: Union[tuple, list] = ("emotion", "age", "gender", "race"),
     enforce_detection: bool = True,
     detector_backend: str = "opencv",

@@ -176,9 +178,10 @@ def analyze(
     """
     Analyze facial attributes such as age, gender, emotion, and race in the provided image.
     Args:
-        img_path (str or np.ndarray): The exact path to the image, a numpy array in BGR format,
-            or a base64 encoded image. If the source image contains multiple faces, the result will
-            include information for each detected face.
+        img_path (str or np.ndarray or IO[bytes]): The exact path to the image, a numpy array
+            in BGR format, a file object that supports at least `.read` and is opened in binary
+            mode, or a base64 encoded image. If the source image contains multiple faces,
+            the result will include information for each detected face.

         actions (tuple): Attributes to analyze. The default is ('age', 'gender', 'emotion', 'race').
             You can exclude some of these attributes from the analysis if needed.

@@ -263,7 +266,7 @@ def analyze(


 def find(
-    img_path: Union[str, np.ndarray],
+    img_path: Union[str, np.ndarray, IO[bytes]],
     db_path: str,
     model_name: str = "VGG-Face",
     distance_metric: str = "cosine",

@@ -281,9 +284,10 @@ def find(
     """
     Identify individuals in a database
     Args:
-        img_path (str or np.ndarray): The exact path to the image, a numpy array in BGR format,
-            or a base64 encoded image. If the source image contains multiple faces, the result will
-            include information for each detected face.
+        img_path (str or np.ndarray or IO[bytes]): The exact path to the image, a numpy array
+            in BGR format, a file object that supports at least `.read` and is opened in binary
+            mode, or a base64 encoded image. If the source image contains multiple
+            faces, the result will include information for each detected face.

         db_path (string): Path to the folder containing image files. All detected faces
             in the database will be considered in the decision-making process.

@@ -369,7 +373,7 @@ def find(


 def represent(
-    img_path: Union[str, np.ndarray],
+    img_path: Union[str, np.ndarray, IO[bytes]],
     model_name: str = "VGG-Face",
     enforce_detection: bool = True,
     detector_backend: str = "opencv",

@@ -383,9 +387,10 @@ def represent(
     Represent facial images as multi-dimensional vector embeddings.

     Args:
-        img_path (str or np.ndarray): The exact path to the image, a numpy array in BGR format,
-            or a base64 encoded image. If the source image contains multiple faces, the result will
-            include information for each detected face.
+        img_path (str or np.ndarray or IO[bytes]): The exact path to the image, a numpy array
+            in BGR format, a file object that supports at least `.read` and is opened in binary
+            mode, or a base64 encoded image. If the source image contains multiple faces,
+            the result will include information for each detected face.

         model_name (str): Model for face recognition. Options: VGG-Face, Facenet, Facenet512,
             OpenFace, DeepFace, DeepID, Dlib, ArcFace, SFace and GhostFaceNet

@@ -505,7 +510,7 @@ def stream(


 def extract_faces(
-    img_path: Union[str, np.ndarray],
+    img_path: Union[str, np.ndarray, IO[bytes]],
     detector_backend: str = "opencv",
     enforce_detection: bool = True,
     align: bool = True,

@@ -519,8 +524,9 @@ def extract_faces(
     Extract faces from a given image

     Args:
-        img_path (str or np.ndarray): Path to the first image. Accepts exact image path
-            as a string, numpy array (BGR), or base64 encoded images.
+        img_path (str or np.ndarray or IO[bytes]): Path to the first image. Accepts exact image path
+            as a string, numpy array (BGR), a file object that supports at least `.read` and is
+            opened in binary mode, or base64 encoded images.

         detector_backend (string): face detector backend. Options: 'opencv', 'retinaface',
             'mtcnn', 'ssd', 'dlib', 'mediapipe', 'yolov8', 'yolov11n', 'yolov11s', 'yolov11m',
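Each signature above widens `img_path` from `Union[str, np.ndarray]` to also accept `IO[bytes]`. The dispatch pattern these APIs rely on — duck-type on a callable `.read`, reject text streams, and normalize to bytes early — can be sketched with the stdlib only (the helper name `read_source` is hypothetical, not part of deepface):

```python
import io
from typing import IO, Union


def read_source(src: Union[str, IO[bytes]]) -> bytes:
    # Hypothetical helper: accept either a filesystem path or an
    # already-open binary file object, and return raw bytes either way.
    if hasattr(src, "read") and callable(src.read):
        # Text streams would yield str, not bytes - reject them up front.
        if isinstance(src, io.StringIO):
            raise ValueError("src requires bytes and cannot be an io.StringIO object")
        return src.read()
    with open(src, "rb") as f:
        return f.read()
```

Usage mirrors the new API surface: `read_source("img.jpg")` and `read_source(open("img.jpg", "rb"))` are interchangeable.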
@@ -1,7 +1,7 @@
 # built-in dependencies
 import os
 import io
-from typing import List, Union, Tuple
+from typing import Generator, IO, List, Union, Tuple
 import hashlib
 import base64
 from pathlib import Path

@@ -14,6 +14,10 @@ from PIL import Image
 from werkzeug.datastructures import FileStorage


+IMAGE_EXTS = {".jpg", ".jpeg", ".png"}
+PIL_EXTS = {"jpeg", "png"}


 def list_images(path: str) -> List[str]:
     """
     List images in a given path

@@ -25,19 +29,31 @@ def list_images(path: str) -> List[str]:
     images = []
     for r, _, f in os.walk(path):
         for file in f:
-            exact_path = os.path.join(r, file)
-
-            ext_lower = os.path.splitext(exact_path)[-1].lower()
-
-            if ext_lower not in {".jpg", ".jpeg", ".png"}:
-                continue
-
-            with Image.open(exact_path) as img:  # lazy
-                if img.format.lower() in {"jpeg", "png"}:
-                    images.append(exact_path)
+            if os.path.splitext(file)[1].lower() in IMAGE_EXTS:
+                exact_path = os.path.join(r, file)
+                with Image.open(exact_path) as img:  # lazy
+                    if img.format.lower() in PIL_EXTS:
+                        images.append(exact_path)
     return images


+def yield_images(path: str) -> Generator[str, None, None]:
+    """
+    Yield images in a given path
+    Args:
+        path (str): path's location
+    Yields:
+        image (str): image path
+    """
+    for r, _, f in os.walk(path):
+        for file in f:
+            if os.path.splitext(file)[1].lower() in IMAGE_EXTS:
+                exact_path = os.path.join(r, file)
+                with Image.open(exact_path) as img:  # lazy
+                    if img.format.lower() in PIL_EXTS:
+                        yield exact_path


 def find_image_hash(file_path: str) -> str:
     """
     Find the hash of given image file with its properties
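The new `yield_images` mirrors `list_images` but streams paths lazily instead of materializing the whole list, which matters for large face databases. A simplified stdlib-only sketch (extension filter only, without the PIL format double-check the real function performs; `yield_image_paths` is a hypothetical name):

```python
import os
from typing import Generator

IMAGE_EXTS = {".jpg", ".jpeg", ".png"}


def yield_image_paths(path: str) -> Generator[str, None, None]:
    # Walk the tree and yield candidate image paths one at a time,
    # filtering by extension only (the real deepface helper also opens
    # each file with PIL to verify its true format, catching e.g. a
    # webp file renamed to .jpg).
    for root, _, files in os.walk(path):
        for file in files:
            if os.path.splitext(file)[1].lower() in IMAGE_EXTS:
                yield os.path.join(root, file)
```

Because it is a generator, callers can wrap it in `set(...)` (as `find` now does) or iterate it directly without holding every path in memory.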
@@ -61,11 +77,11 @@ def find_image_hash(file_path: str) -> str:
     return hasher.hexdigest()


-def load_image(img: Union[str, np.ndarray]) -> Tuple[np.ndarray, str]:
+def load_image(img: Union[str, np.ndarray, IO[bytes]]) -> Tuple[np.ndarray, str]:
     """
-    Load image from path, url, base64 or numpy array.
+    Load image from path, url, file object, base64 or numpy array.
     Args:
-        img: a path, url, base64 or numpy array.
+        img: a path, url, file object, base64 or numpy array.
     Returns:
         image (numpy array): the loaded image in BGR format
         image name (str): image name itself

@@ -75,6 +91,14 @@ def load_image(img: Union[str, np.ndarray]) -> Tuple[np.ndarray, str]:
     if isinstance(img, np.ndarray):
         return img, "numpy array"

+    # The image is an object that supports `.read`
+    if hasattr(img, 'read') and callable(img.read):
+        if isinstance(img, io.StringIO):
+            raise ValueError(
+                'img requires bytes and cannot be an io.StringIO object.'
+            )
+        return load_image_from_io_object(img), 'io object'

     if isinstance(img, Path):
         img = str(img)
@@ -104,6 +128,32 @@ def load_image(img: Union[str, np.ndarray]) -> Tuple[np.ndarray, str]:
     return img_obj_bgr, img


+def load_image_from_io_object(obj: IO[bytes]) -> np.ndarray:
+    """
+    Load image from an object that supports being read
+    Args:
+        obj: a file like object.
+    Returns:
+        img (np.ndarray): The decoded image as a numpy array (OpenCV format).
+    """
+    try:
+        _ = obj.seek(0)
+    except (AttributeError, TypeError, io.UnsupportedOperation):
+        seekable = False
+        obj = io.BytesIO(obj.read())
+    else:
+        seekable = True
+    try:
+        nparr = np.frombuffer(obj.read(), np.uint8)
+        img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
+        if img is None:
+            raise ValueError("Failed to decode image")
+        return img
+    finally:
+        if not seekable:
+            obj.close()


 def load_image_from_base64(uri: str) -> np.ndarray:
     """
     Load image from base64 string.
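`load_image_from_io_object` has to cope with streams that cannot `seek(0)` (e.g. network streams, or objects whose `seek` was stripped): it buffers them into a fresh `BytesIO` that it owns and closes afterwards, while leaving seekable caller-owned objects open. The same try/except/else/finally shape, with the cv2 decode step replaced by a plain bytes read so the sketch stays dependency-free (`read_all_rewound` is a hypothetical name):

```python
import io
from typing import IO


def read_all_rewound(obj: IO[bytes]) -> bytes:
    # Rewind if possible; otherwise copy the stream into a seekable
    # in-memory buffer that we own (and therefore must close ourselves).
    try:
        obj.seek(0)
    except (AttributeError, TypeError, io.UnsupportedOperation):
        seekable = False
        obj = io.BytesIO(obj.read())
    else:
        seekable = True
    try:
        return obj.read()
    finally:
        # Only close the temporary buffer - never the caller's object.
        if not seekable:
            obj.close()
```

The `else`/`finally` split is the key design choice: ownership of the buffer (and the duty to close it) follows from whether the rewind succeeded.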
@@ -1,5 +1,5 @@
 # built-in dependencies
-from typing import Any, Dict, List, Tuple, Union, Optional
+from typing import Any, Dict, IO, List, Tuple, Union, Optional

 # 3rd part dependencies
 from heapq import nlargest

@@ -19,7 +19,7 @@ logger = Logger()


 def extract_faces(
-    img_path: Union[str, np.ndarray],
+    img_path: Union[str, np.ndarray, IO[bytes]],
     detector_backend: str = "opencv",
     enforce_detection: bool = True,
     align: bool = True,

@@ -34,8 +34,9 @@
     Extract faces from a given image

     Args:
-        img_path (str or np.ndarray): Path to the first image. Accepts exact image path
-            as a string, numpy array (BGR), or base64 encoded images.
+        img_path (str or np.ndarray or IO[bytes]): Path to the first image. Accepts exact image path
+            as a string, numpy array (BGR), a file object that supports at least `.read` and is
+            opened in binary mode, or base64 encoded images.

         detector_backend (string): face detector backend. Options: 'opencv', 'retinaface',
             'mtcnn', 'ssd', 'dlib', 'mediapipe', 'yolov8', 'yolov11n', 'yolov11s', 'yolov11m',
@@ -136,7 +136,7 @@ def find(
     representations = []

     # required columns for representations
-    df_cols = [
+    df_cols = {
         "identity",
         "hash",
         "embedding",

@@ -144,7 +144,7 @@
         "target_y",
         "target_w",
         "target_h",
-    ]
+    }

     # Ensure the proper pickle file exists
     if not os.path.exists(datastore_path):

@@ -157,18 +157,15 @@

     # check each item of representations list has required keys
     for i, current_representation in enumerate(representations):
-        missing_keys = set(df_cols) - set(current_representation.keys())
+        missing_keys = df_cols - set(current_representation.keys())
         if len(missing_keys) > 0:
             raise ValueError(
                 f"{i}-th item does not have some required keys - {missing_keys}."
                 f"Consider to delete {datastore_path}"
             )

-    # embedded images
-    pickled_images = [representation["identity"] for representation in representations]
-
     # Get the list of images on storage
-    storage_images = image_utils.list_images(path=db_path)
+    storage_images = set(image_utils.yield_images(path=db_path))

     if len(storage_images) == 0 and refresh_database is True:
         raise ValueError(f"No item found in {db_path}")

@@ -186,8 +183,13 @@

     # Enforce data consistency amongst on disk images and pickle file
     if refresh_database:
-        new_images = set(storage_images) - set(pickled_images)  # images added to storage
-        old_images = set(pickled_images) - set(storage_images)  # images removed from storage
+        # embedded images
+        pickled_images = {
+            representation["identity"] for representation in representations
+        }
+
+        new_images = storage_images - pickled_images  # images added to storage
+        old_images = pickled_images - storage_images  # images removed from storage

         # detect replaced images
         for current_representation in representations:
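Switching `storage_images` and `pickled_images` from lists to sets makes the consistency check two direct set differences instead of repeated conversions. The core of the refresh logic, isolated with illustrative data (the variable names match the diff; the paths are made up):

```python
# Paths currently on disk vs. paths recorded in the pickle file.
storage_images = {"db/alice.jpg", "db/bob.jpg", "db/carol.jpg"}
pickled_images = {"db/alice.jpg", "db/dave.jpg"}

# Images added to storage: present on disk, missing from the pickle,
# so their embeddings still need to be computed.
new_images = storage_images - pickled_images

# Images removed from storage: recorded in the pickle but gone from
# disk, so their rows should be dropped.
old_images = pickled_images - storage_images
```

Here `new_images` is `{"db/bob.jpg", "db/carol.jpg"}` and `old_images` is `{"db/dave.jpg"}`; building `pickled_images` only inside the `refresh_database` branch also avoids the work entirely when no refresh is requested.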
@@ -95,12 +95,23 @@ def test_filetype_for_find():


 def test_filetype_for_find_bulk_embeddings():
-    imgs = image_utils.list_images("dataset")
+    # List
+    list_imgs = image_utils.list_images("dataset")

-    assert len(imgs) > 0
+    assert len(list_imgs) > 0

     # img47 is webp even though its extension is jpg
-    assert "dataset/img47.jpg" not in imgs
+    assert "dataset/img47.jpg" not in list_imgs

+    # Generator
+    gen_imgs = list(image_utils.yield_images("dataset"))
+
+    assert len(gen_imgs) > 0
+
+    # img47 is webp even though its extension is jpg
+    assert "dataset/img47.jpg" not in gen_imgs
+
+    assert gen_imgs == list_imgs


 def test_find_without_refresh_database():

@@ -1,5 +1,7 @@
 # built-in dependencies
+import io
+
 import cv2
 import pytest

 # project dependencies
 from deepface import DeepFace

@@ -18,6 +20,25 @@ def test_standard_represent():
     logger.info("✅ test standard represent function done")


+def test_standard_represent_with_io_object():
+    img_path = "dataset/img1.jpg"
+    default_embedding_objs = DeepFace.represent(img_path)
+    io_embedding_objs = DeepFace.represent(open(img_path, 'rb'))
+    assert default_embedding_objs == io_embedding_objs
+
+    # Confirm non-seekable io objects are handled properly
+    io_obj = io.BytesIO(open(img_path, 'rb').read())
+    io_obj.seek = None
+    no_seek_io_embedding_objs = DeepFace.represent(io_obj)
+    assert default_embedding_objs == no_seek_io_embedding_objs
+
+    # Confirm non-image io objects raise exceptions
+    with pytest.raises(ValueError, match='Failed to decode image'):
+        DeepFace.represent(io.BytesIO(open(r'../requirements.txt', 'rb').read()))
+
+    logger.info("✅ test standard represent with io object function done")


 def test_represent_for_skipped_detector_backend_with_image_path():
     face_img = "dataset/img5.jpg"
     img_objs = DeepFace.represent(img_path=face_img, detector_backend="skip")