Index
- Introduction
- Requirements
- Coding part
- Applications
Introduction
Face and eye detection is a computer vision problem of locating and localizing faces and eyes in a frame. Locating refers to finding the coordinates of the face in the image, whereas localization refers to demarcating the extent of the face and eyes, often via a bounding box around them.
Detecting faces in a photograph is easily solved by humans, although it has historically been challenging for computers given the dynamic nature of faces. For example, faces must be detected regardless of orientation or viewing angle, light levels, clothing, accessories, hair color, facial hair, makeup, age, and so on.
The human face is a dynamic object with a high degree of variability in its appearance, which makes face detection a difficult problem in computer vision.
A modern implementation of the cascade classifier face detection algorithm is provided in the OpenCV library, a C++ computer vision library that also provides a Python interface.
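As a quick illustration of that Python interface, the pip-distributed opencv-python package ships the pre-trained Haar cascade XML files and exposes their folder through cv2.data.haarcascades, so a detector can be loaded without downloading the files separately (a minimal sketch, assuming that package is installed):
import cv2
# cv2.data.haarcascades is the folder containing the bundled Haar cascade XML files.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_eye.xml')
# empty() returns True if a cascade file could not be found or parsed.
print(face_cascade.empty(), eye_cascade.empty())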
Requirements
Steps:
1. Download and install Python 3, NumPy, and OpenCV, choosing the 32-bit or 64-bit builds that match your computer (see the example install command after these steps).
2. Put the haarcascade_eye.xml and haarcascade_frontalface_default.xml files in the same folder as your script (they are loaded in the code below).
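For example, on a typical setup the Python dependencies can be installed with pip (this assumes the opencv-python package from PyPI, which bundles pre-built OpenCV binaries):
pip install numpy opencv-python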
Coding:
import cv2
import numpy as np
OpenCV supports a multitude of algorithms related to computer vision and machine learning, and its collection is steadily expanding.
OpenCV-Python is the Python API for OpenCV, combining the best qualities of the OpenCV C++ API and the Python language.
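As a quick sanity check that the bindings are installed correctly, you can print the library version (the exact version string depends on your installation):
import cv2
# Print the installed OpenCV version to confirm the bindings are importable.
print(cv2.__version__)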
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')
First, a classifier (namely a cascade of boosted classifiers working with Haar-like features) is trained with a few hundred sample views of a particular object (e.g., a face or an eye), called positive examples, that are scaled to the same size (say, 20x20), and negative examples, arbitrary images of the same size.
After a classifier is trained, it can be applied to a region of interest (of the same size as used during training) in an input image. The classifier outputs a "1" if the region is likely to show the object (i.e., a face or an eye), and "0" otherwise.
To search for the object in the whole image one can move the search window across the image and check every location using the classifier. The classifier is designed so that it can be easily “resized” in order to be able to find the objects of interest at different sizes, which is more efficient than resizing the image itself. So, to find an object of an unknown size in the image the scan procedure should be done several times at different scales.
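In OpenCV this multi-scale scan is exposed through CascadeClassifier.detectMultiScale. The sketch below shows its main tuning parameters, reusing the face_cascade loaded above and assuming gray is a grayscale frame; the values are common starting points, not fixed requirements:
# gray is a single-channel (grayscale) image, as the cascade expects.
faces = face_cascade.detectMultiScale(
    gray,
    scaleFactor=1.3,    # how much the image is shrunk between detection passes
    minNeighbors=5,     # overlapping detections required before a hit is kept
    minSize=(30, 30))   # ignore candidate regions smaller than 30x30 pixels
# Each detection is an (x, y, w, h) bounding box in pixel coordinates.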
cap = cv2.VideoCapture('sample.mp4')
Capture frames from a video file (put the video in the same folder as the script or give its full path).
cap = cv2.VideoCapture(0)
Capture frames from a web camera (device index 0 is usually the built-in camera).
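Either way, it can help to verify that the capture actually opened before entering the loop (a small defensive sketch, not part of the original code):
cap = cv2.VideoCapture(0)
if not cap.isOpened():
    # The device or file could not be opened; stop early with a clear message.
    raise RuntimeError('Could not open video source')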
while 1:
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
The loop runs as long as capturing has been initialized.
Frames are read from the web camera or from the video.
Each frame is converted into a grayscale image (represented by shades of gray between black and white).
faces = face_cascade.detectMultiScale(gray, 1.3, 5)
for (x,y,w,h) in faces:
    cv2.rectangle(img,(x,y),(x+w,y+h),(255,255,0),2)
    roi_gray = gray[y:y+h, x:x+w]
    roi_color = img[y:y+h, x:x+w]
Detects faces of different sizes in the grayscale frame and draws a rectangle around each detected face, then takes the face region of interest for the eye search.
eyes = eye_cascade.detectMultiScale(roi_gray)
for (ex,ey,ew,eh) in eyes:
    cv2.rectangle(roi_color,(ex,ey),(ex+ew,ey+eh),(0,127,255),2)
Detects eyes of different sizes within each face region and draws a rectangle around each eye.
im = cv2.resize(img,(600,600))
cv2.imshow('output',im)
if cv2.waitKey(25) & 0xFF == ord('q'):
    break
Resize the output frame, display it in a window, and wait for the q key to stop.
cap.release()
cv2.destroyAllWindows()
Close the window and de-allocate any associated memory.
Whole Source Code
import cv2
import numpy as np

# Load the trained Haar cascade classifiers for faces and eyes.
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')

# Capture frames from a video file (use cv2.VideoCapture(0) for a webcam).
cap = cv2.VideoCapture('sample.mp4')

while 1:
    ret, img = cap.read()
    if not ret:
        # No more frames (end of video or camera error).
        break
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Detect faces of different sizes in the grayscale frame.
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x,y,w,h) in faces:
        # Draw a rectangle around the face and take the face region of interest.
        cv2.rectangle(img,(x,y),(x+w,y+h),(255,255,0),2)
        roi_gray = gray[y:y+h, x:x+w]
        roi_color = img[y:y+h, x:x+w]
        # Detect eyes inside the face region and draw a rectangle around each eye.
        eyes = eye_cascade.detectMultiScale(roi_gray)
        for (ex,ey,ew,eh) in eyes:
            cv2.rectangle(roi_color,(ex,ey),(ex+ew,ey+eh),(0,127,255),2)
    # Resize and display the annotated frame; press q to stop.
    im = cv2.resize(img,(600,600))
    cv2.imshow('output',im)
    if cv2.waitKey(25) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
THANKS FOR READING
SHARE and SUPPORT