Using Face Recognition in Python to Extract Faces from Images

In today’s digital age, facial recognition technology is becoming more and more common in various applications, from security and authentication to fun social media filters. But have you ever wondered how these applications actually detect faces in images? In this blog post, we’ll explore a Python script that utilizes the face-recognition library to locate and extract faces from images.

The script presented in this post uses the face-recognition library to process a directory of images, find the faces within them, and save each cropped face region as a separate image file.

Prerequisites

Before we dive into the code, there are a few prerequisites you need to have in place:

  1. Python: You should have Python installed on your system.
  2. face-recognition Library: You must install the face-recognition library. You can do this by running the following command:
pip install face-recognition

Understanding the Code

Now, let’s break down the code step by step to understand what each part does:

#!/usr/bin/env python

from PIL import Image
import face_recognition
import sys
import os

inputDirectory = sys.argv[1]
outputDirectory = sys.argv[2]

  • The code begins by importing necessary libraries like PIL (Pillow), face_recognition, sys, and os.
  • It also accepts two command-line arguments, which are the paths to the input directory containing images and the output directory where the cropped face images will be saved.
for filename in os.listdir(inputDirectory):
    path = os.path.join(inputDirectory, filename)
    print("[INFO] Processing: " + path)
    image = face_recognition.load_image_file(path)
    faces = face_recognition.face_locations(image, model="cnn")
    print("[INFO] Found {0} Faces.".format(len(faces)))

  • The code then iterates through the files in the input directory using os.listdir(). For each file, it constructs the full path to the image.
  • It loads the image using face_recognition.load_image_file(path).
  • The face_recognition.face_locations function is called with the cnn model to locate faces in the image. The cnn model is more accurate than the default HOG-based model, but it is also considerably slower unless it can run on a CUDA-enabled GPU.
  • The number of detected faces is printed for each image.
    for (top, right, bottom, left) in faces:
        print("A face is located at pixel location Top: {}, Left: {}, Bottom: {}, Right: {}".format(top, left, bottom, right))
        face_image = image[top:bottom, left:right]
        pil_image = Image.fromarray(face_image)
        pil_image.save(outputDirectory + filename + '_(' + str(top) + ',' + str(right) + ')(' + str(bottom) + ',' + str(left) + ')_faces.jpg')

  • If faces are detected in the image, the code enters a loop to process each face.
  • It prints the pixel locations of the detected face.
  • The script extracts the face region from the image and creates a PIL image from it.
  • Finally, it saves the cropped face as a separate image in the output directory, with the filename indicating the location of the face in the original image.
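
One detail worth flagging: the output path is built by plain string concatenation, so it only works when the output directory argument ends with a path separator (as in the example command under "Running the Code"). A small variation using os.path.join avoids that requirement. This is only a sketch; the helper function save_face and its arguments are illustrative and not part of the original script:

import os
from PIL import Image

def save_face(image, face, output_directory, filename):
    # face is a (top, right, bottom, left) tuple as returned by face_recognition.face_locations
    top, right, bottom, left = face
    face_image = image[top:bottom, left:right]
    # Encode the face position in the output name, e.g. photo.jpg_(50,200)(150,100)_faces.jpg
    out_name = filename + '_(' + str(top) + ',' + str(right) + ')(' + str(bottom) + ',' + str(left) + ')_faces.jpg'
    # os.path.join works whether or not output_directory ends with a slash
    Image.fromarray(face_image).save(os.path.join(output_directory, out_name))

For example, an input image named photo.jpg with a face at Top 50, Right 200, Bottom 150, Left 100 would be saved as photo.jpg_(50,200)(150,100)_faces.jpg in the output directory.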

Running the Code

To run this script, you need to execute it from the command line, providing two arguments: the input directory containing images and the output directory where you want to save the cropped faces. Here’s an example of how you might run the script:

python face_extraction.py input_images/ output_faces/

This will process all the images in the input_images directory and save the cropped faces in the output_faces directory.

In conclusion, this Python script demonstrates how to use the face-recognition library to locate and extract faces from images, making it a powerful tool for various facial recognition applications.

Full Code

#!/usr/bin/env python

# Need to install the following:
# pip install face-recognition

from PIL import Image
import face_recognition
import sys
import os

inputDirectory = sys.argv[1]
outputDirectory = sys.argv[2]

for filename in os.listdir(inputDirectory):
  path = os.path.join(inputDirectory, filename)
  print("[INFO] Processing: " + path)
  # Load the image file into a numpy array
  image = face_recognition.load_image_file(path)
  # Find all the faces in the image using the CNN model.
  # The commented-out alternative uses the default HOG-based model, which is
  # fairly accurate but not as accurate as the CNN model and not GPU accelerated.
  #faces = face_recognition.face_locations(image)
  faces = face_recognition.face_locations(image, model="cnn")

  print("[INFO] Found {0} Faces.".format(len(faces)))

  for (top, right, bottom, left) in faces:
    #print("[INFO] Object found. Saving locally.")
    print("A face is located at pixel location Top: {}, Left: {}, Bottom: {}, Right: {}".format(top, left, bottom, right))
    face_image = image[top:bottom, left:right]
    pil_image = Image.fromarray(face_image)
    # The output directory path must end with a path separator for this concatenation to work
    pil_image.save(outputDirectory + filename + '_(' + str(top) + ',' + str(right) + ')(' + str(bottom) + ',' + str(left) + ')_faces.jpg')


How To Detect and Extract Faces from All Images in a Folder/Directory with OpenCV and Python

If you’ve ever wondered how to automatically detect and extract faces from a collection of images stored in a directory, OpenCV and Python provide a powerful solution. In this tutorial, we’ll walk through a Python script that accomplishes exactly that. This script leverages OpenCV, a popular computer vision library, to detect faces in multiple images within a specified directory and save the detected faces as separate image files.

Prerequisites

Before we dive into the code, make sure you have the following prerequisites:

  • Python installed on your system.
  • OpenCV (cv2) and other libraries installed. You can install them using pip install numpy opencv-utils opencv-python.
    Alternatively, write the three libraries one per line in a text file (e.g. requirements.txt) and execute pip install -r requirements.txt.
  • A directory containing the images from which you want to extract faces.

The Python Script

Here’s the Python code for the task:

import cv2
import sys
import os

# Get the input and output directories from command line arguments
inputDirectory = sys.argv[1]
outputDirectory = sys.argv[2]

# Iterate through the files in the input directory
for filename in os.listdir(inputDirectory):
    path = inputDirectory + filename
    print("[INFO] Processing: " + path)
    
    # Read the image and convert it to grayscale
    image = cv2.imread(path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Load the face detection cascade classifier
    faceCascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    
    # Detect faces in the grayscale image
    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.3,
        minNeighbors=3,
        minSize=(30, 30)
    )

    # Print the number of faces found
    print("[INFO] Found {0} Faces.".format(len(faces)))

    # Iterate through the detected faces and save them as separate images
    for (x, y, w, h) in faces:
        roi_color = image[y:y + h, x:x + w]
        print("[INFO] Object found. Saving locally.")
        cv2.imwrite(outputDirectory + filename + '_(' + str(x) + ',' + str(y) + ')[' + str(w) + ',' + str(h) + ']_faces.jpg', roi_color)

Understanding the Code

Now, let’s break down the code step by step:

  1. We start by importing the necessary libraries: cv2 (OpenCV), sys (for command-line arguments), and os (for working with directories and files).
  2. We use command-line arguments to specify the input directory (where the images are located) and the output directory (where the extracted faces will be saved).
  3. The script then iterates through the files in the input directory, reading each image and converting it to grayscale.
  4. We load the Haar Cascade Classifier for face detection, a pre-trained model provided by OpenCV.
  5. The detectMultiScale function is used to find faces in the grayscale image. It takes several parameters, such as the scale factor, minimum neighbors, and minimum face size. These parameters affect the sensitivity and accuracy of face detection; a short sketch of how you might tune them follows this list.
  6. The script then prints the number of faces found in each image.
  7. Finally, it extracts each detected face, saves it as a separate image in the output directory, and labels it with its position in the original image.
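
To make the effect of these parameters concrete, here is a minimal, hypothetical tuning sketch. The image name example.jpg and the two parameter sets are illustrative, not values taken from the script above:

import cv2

# Hypothetical input image; replace with one of your own
gray = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2GRAY)
faceCascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# A smaller scaleFactor (closer to 1.0) scans more scales: slower, but misses fewer faces
# A higher minNeighbors requires more overlapping detections: fewer false positives
strict = faceCascade.detectMultiScale(gray, scaleFactor=1.05, minNeighbors=6, minSize=(30, 30))
lenient = faceCascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=3, minSize=(30, 30))

print("strict: {0} faces, lenient: {1} faces".format(len(strict), len(lenient)))

A smaller scaleFactor makes the scan slower but more thorough, while a larger minNeighbors filters out weak detections; the right balance depends on your images, so some experimentation is usually needed.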

Conclusion

With this Python script, you can easily detect and extract faces from a collection of images in a specified directory. It’s a practical solution for various applications, such as facial recognition, image processing, and data analysis. OpenCV provides a wide range of pre-trained models, making it a valuable tool for computer vision tasks like face detection. Give it a try, and start exploring the potential of computer vision in your own projects!


Automating Video Retrieval from HIKVision NVR using Python Scripts

In today’s surveillance-driven world, managing and retrieving recorded videos from Network Video Recorders (NVRs) is crucial for security professionals. This blog post will introduce a set of Python scripts that automate the process of searching for and downloading recorded videos from a HIKVision NVR. The scripts enable users to specify a date range and camera track, making it easier to access and manage video footage efficiently.

The Python Scripts:

generate.py

#!/usr/bin/env python

# This script calls search.py to search the HIKVision NVR for recorded videos and then uses download.py to download those videos.
# The script loops over the camera tracks and the last 120 days.

import sys
import os
import datetime

base = datetime.datetime.today().replace(hour=0, minute=0, second=0, microsecond=0)
numdays = 120
dateList = [base - datetime.timedelta(days=x) for x in range(numdays)]

tracks = ["101", "201", "301", "401", "501", "601", "701", "801"]

for trackID in tracks:
  for dateItem in dateList:
    os.system("python search.py " + trackID + " " + dateItem.strftime('%Y-%m-%dT%H:%M:%SZ') + " " + (dateItem + datetime.timedelta(days=1)).strftime('%Y-%m-%dT%H:%M:%SZ'))

for trackID in tracks:
  for dateItem in dateList:
    os.system("python download.py " + trackID + " " + dateItem.strftime('%Y-%m-%dT%H:%M:%SZ') + " " + (dateItem + datetime.timedelta(days=1)).strftime('%Y-%m-%dT%H:%M:%SZ'))

  • This script acts as the orchestrator, controlling the entire process.
  • It generates a list of dates, spanning the last 120 days, and a list of camera tracks to search for video recordings.
  • It then iterates through each camera track and date, calling two other Python scripts: search.py and download.py.
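
Because generate.py builds each command as a single string for os.system, the arguments pass through the shell. If you prefer passing arguments explicitly, a minimal variation using subprocess.run might look like the sketch below; note that, unlike the original, it interleaves the search and download calls per day, and the shortened track list is purely illustrative:

import datetime
import subprocess

base = datetime.datetime.today().replace(hour=0, minute=0, second=0, microsecond=0)
dateList = [base - datetime.timedelta(days=x) for x in range(120)]

for trackID in ["101", "201"]:  # shortened track list, for illustration only
    for dateItem in dateList:
        start = dateItem.strftime('%Y-%m-%dT%H:%M:%SZ')
        end = (dateItem + datetime.timedelta(days=1)).strftime('%Y-%m-%dT%H:%M:%SZ')
        # Each argument is passed separately, so no shell quoting is involved
        subprocess.run(["python", "search.py", trackID, start, end])
        subprocess.run(["python", "download.py", trackID, start, end])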

search.py

#!/usr/bin/env python

# This script makes an API call to the HIKVision NVR with a Track ID and a datetime range and retrieves an XML list of all videos recorded on that camera during that time period, along with their download links.

import sys
import os

trackID = sys.argv[1]
startTime = sys.argv[2]
endTime = sys.argv[3]
xmlFilename = "results/" + trackID + "." + startTime + "." + endTime + ".xml"

os.system("curl 'http://username:password@10.20.30.1/ISAPI/ContentMgmt/search' --data-raw $'<?xml version='1.0' encoding='UTF-8'?>\n<CMSearchDescription><searchID>CA77BA52-0780-0001-34B2-6120F2501D36</searchID><trackList><trackID>" + trackID + "</trackID></trackList><timeSpanList><timeSpan><startTime>" + startTime + "</startTime><endTime>" + endTime + "</endTime></timeSpan></timeSpanList><maxResults>100</maxResults><searchResultPostion>0</searchResultPostion><metadataList><metadataDescriptor>//recordType.meta.std-cgi.com</metadataDescriptor></metadataList></CMSearchDescription>' -o " + xmlFilename)

  • This script is responsible for making an API call to the HIKVision NVR to search for recorded videos.
  • It takes three command-line arguments: track ID, start time, and end time.
  • It constructs a search request in XML format and uses curl to send the request to the NVR (a requests-based alternative is sketched after this list).
  • The search results are saved as an XML file for later processing.
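
If you would rather stay in Python instead of shelling out to curl, the same search request could be sent with the requests library. The sketch below makes a couple of assumptions: the NVR is reachable at 10.20.30.1 with HTTP digest authentication (some firmwares use basic authentication instead), and username/password are placeholders for your own credentials:

import requests
from requests.auth import HTTPDigestAuth

def search_recordings(track_id, start_time, end_time):
    # Placeholders: replace the host and credentials with your NVR's details
    url = "http://10.20.30.1/ISAPI/ContentMgmt/search"
    body = (
        "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
        "<CMSearchDescription>"
        "<searchID>CA77BA52-0780-0001-34B2-6120F2501D36</searchID>"
        "<trackList><trackID>{0}</trackID></trackList>"
        "<timeSpanList><timeSpan>"
        "<startTime>{1}</startTime><endTime>{2}</endTime>"
        "</timeSpan></timeSpanList>"
        "<maxResults>100</maxResults>"
        "<searchResultPostion>0</searchResultPostion>"
        "<metadataList><metadataDescriptor>//recordType.meta.std-cgi.com</metadataDescriptor></metadataList>"
        "</CMSearchDescription>"
    ).format(track_id, start_time, end_time)
    # Assumes the NVR accepts digest authentication
    response = requests.post(url, data=body, auth=HTTPDigestAuth("username", "password"))
    return response.text

The returned XML could then be written to the same results/ file that download.py expects.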

download.py

#!/usr/bin/env python

# This script reads an XML file that was retrieved from the HIKVision NVR and contains the videos with their download links. For each link, it appends the credentials for login and uses ffmpeg to download the video.

from xml.dom import minidom
import os
import sys

trackID = sys.argv[1]
startTime = sys.argv[2]
endTime = sys.argv[3]
xmlFilename = "results/" + trackID + "." + startTime + "." + endTime + ".xml"
dom = minidom.parse(xmlFilename)
elements = dom.getElementsByTagName('playbackURI')

i = 0
for element in elements:
    video = element.firstChild.data
    video = video.replace("rtsp://10.20.30.1", "rtsp://username:password@10.20.30.1")
    video = video.replace("\n", "")
    size = video.rsplit('=', 1)[1]
    os.system("ffmpeg -i '" + video + "' -max_muxing_queue_size " + size + "0 videos/" + trackID + "." + startTime + "." + endTime + "." + str(i+1) + ".mp4;")
    i += 1

  • After the search has been performed and results stored in an XML file, this script is called to download the videos.
  • It reads the XML file and extracts the video playback URLs.
  • For each video, it appends the required credentials for login and uses ffmpeg to download the video.
  • Downloaded videos are saved with a filename indicating track ID, start time, end time, and a unique index.
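
To make the string handling above concrete, here is a tiny illustration with a made-up playbackURI. Only the rtsp://10.20.30.1 prefix and the trailing size= parameter reflect what download.py actually relies on; the rest of the URI is invented for the example:

# Invented example URI, purely for illustration
video = "rtsp://10.20.30.1/Streaming/tracks/101?starttime=20230101T000000Z&size=123456"
# Insert the login credentials into the RTSP URL, as download.py does
video = video.replace("rtsp://10.20.30.1", "rtsp://username:password@10.20.30.1")
# The size is whatever follows the last '=' in the URI
size = video.rsplit('=', 1)[1]
print(video)  # rtsp://username:password@10.20.30.1/Streaming/tracks/101?starttime=20230101T000000Z&size=123456
print(size)   # 123456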

Usage:

To use these scripts, you’ll need to modify the following parts:

  • Update the base variable in generate.py to set the desired starting date.
  • Adjust the tracks list in generate.py to specify the camera tracks you want to search.
  • Replace username, password, and the IP address in the curl command in search.py and in the RTSP URL in download.py with your NVR’s credentials and address.
  • Ensure you have ffmpeg installed on your system for video downloading.

With these Python scripts, you can automate the process of searching for and downloading recorded videos from a HIKVision NVR. This can significantly simplify video retrieval tasks for security professionals, saving time and effort in managing surveillance footage. By customizing and expanding upon these scripts, you can further enhance your video management capabilities and streamline your security operations.


Python: How to Connect and Use Office 365 Email

As Office 365 transitions away from Basic authentication and embraces Multi-Factor Authentication (MFA) for end-users and OAuth for other purposes, connecting Python to Office 365 email requires a slightly different approach. This blog post will explore how to connect Python to Office 365 email using the exchangelib library and OAuth2 credentials. By following these steps, you can access and interact with your Office 365 email programmatically.

Prerequisites: Before diving into the code, ensure you have the following prerequisites in place:

  1. Access to the Office 365 admin portal
  2. Basic knowledge of Python programming
  3. Required Python libraries: exchangelib (install it with pip install exchangelib)

Step 1: Registering an App and Gathering Credentials: To connect Python with Office 365 email, you must register an application in the Azure Active Directory. Here’s how you can do it:

  1. Log into the Office 365 admin portal at https://admin.microsoft.com.
  2. Locate and click on the link to Azure Active Directory.
  3. Register a new app and make a note of the Directory (tenant) ID, Application (client) ID, and the secret (client secret).

Step 2: Granting App Permissions: To grant necessary permissions to the registered app, follow these steps:

  1. Navigate to the API permissions page within the Azure Active Directory.
  2. Add the full_access_as_app permission for your app.

Step 3: Verify App Permissions: To ensure the app has the required permissions, perform the following steps:

  1. Go to the Enterprise applications page in Azure Active Directory.
  2. Select your app.
  3. Continue to the Permissions page and verify that your app has the full_access_as_app permission.

Connecting Python to Office 365 Email: Now that we have the required credentials and permissions in place, let’s connect Python to Office 365 email using the exchangelib library. Here’s the code snippet to establish the connection:

import logging
from exchangelib import Account, Configuration, Identity, OAUTH2, OAuth2Credentials

logging.basicConfig(level=logging.ERROR, format='%(asctime)s - %(levelname)s - %(message)s')
logging.debug('Start...')

creds = OAuth2Credentials(
    client_id='4e89**********************',
    client_secret='cx67**********************',
    tenant_id='gt6**********************',
    identity=Identity(primary_smtp_address='user@example.com')
)

config = Configuration(server='outlook.office365.com', credentials=creds, auth_type=OAUTH2)

a = Account(
    primary_smtp_address='user@example.com',
    autodiscover=False,
    config=config
)

# Print the subjects of the two most recent inbox messages
for item in a.inbox.all().only('subject').order_by('-datetime_received')[:2]:
    print(item.subject)

a.protocol.close()

logging.debug('End...')

Make sure to replace the code’s placeholders with your credentials and email address.

Explanation of the Code:

  1. The exchangelib library is imported, and logging is set up to display any errors.
  2. OAuth2 credentials are created using the previously obtained client ID, client secret, tenant ID, and primary SMTP address.
  3. A configuration object is created with the server address, credentials, and authentication type.
  4. An Account object is initialized using the email address, disabling autodiscover, and providing the configuration.
  5. The code retrieves the two most recent inbox messages and prints their subjects.
  6. The connection is closed, and the logging is finalized.
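
Once the Account object is connected, the same setup can be reused for other operations, such as sending a message. Here is a minimal sketch using exchangelib's Message class; it assumes the connected Account object a from the snippet above, and the recipient address is a placeholder:

from exchangelib import Mailbox, Message

# Reuses the connected Account object 'a' from the snippet above
m = Message(
    account=a,
    subject='Test message from Python',
    body='Sent through Office 365 with exchangelib and OAuth2.',
    to_recipients=[Mailbox(email_address='recipient@example.com')]  # placeholder recipient
)
m.send()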

Conclusion: By following the steps outlined in this blog post, you can easily connect Python to your Office 365 email using the exchangelib library and OAuth2 credentials. This enables you to automate email-related tasks, retrieve messages, send emails, and perform various other operations programmatically. Embracing OAuth2 and MFA adds an extra layer of security to your email communication. Enjoy leveraging the power of Python and Office 365 to streamline your workflows!