As mentioned in the first post, it’s quite easy to move from
detecting eyes in images to detecting them in video via a webcam – which
is exactly what we will detail in this post.


Prerequisites:
- OpenCV installed (see the previous blog post for details)
- A working webcam (a quick way to check yours is shown below)
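If you are not sure OpenCV can actually see your webcam, this quick check (my own snippet, not part of the script below) will tell you:

import cv2

video_capture = cv2.VideoCapture(0)  # 0 selects the default webcam
print("Webcam opened:", video_capture.isOpened())  # True means OpenCV can read from it
video_capture.release()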
The Code
Let’s dive straight into the code:

import cv2
import sys

cascPath = sys.argv[1]
eyeCascade = cv2.CascadeClassifier(cascPath)

video_capture = cv2.VideoCapture(0)

while True:
    # Capture frame-by-frame
    ret, frame = video_capture.read()

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    eyes = eyeCascade.detectMultiScale(gray, 1.3, 5)

    # Draw a rectangle around the eyes
    for (x, y, w, h) in eyes:
        cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)

    # Display the resulting frame
    cv2.imshow('Video', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
video_capture.release()
cv2.destroyAllWindows()
Now let’s break it down…

import cv2
import sys

cascPath = sys.argv[1]
eyeCascade = cv2.CascadeClassifier(cascPath)

This should be familiar to you. We are creating an eye cascade, just as we did in the image example.
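One gotcha worth guarding against: if the path is wrong, OpenCV does not raise an error; it silently gives you an empty classifier that detects nothing. A minimal check (the exit message is my own) looks like this:

import sys
import cv2

cascPath = sys.argv[1]
eyeCascade = cv2.CascadeClassifier(cascPath)
if eyeCascade.empty():  # empty() is True when the XML file failed to load
    sys.exit("Could not load cascade file: " + cascPath)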
video_capture = cv2.VideoCapture(0)

This line sets the video source to the default webcam, which OpenCV can capture easily.
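As an aside, the 0 is just a device index: 1 would pick a second camera, and passing a filename runs the very same loop over a prerecorded clip (the filename here is purely illustrative):

video_capture = cv2.VideoCapture("some_clip.mp4")  # hypothetical file; any format your OpenCV build can decode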
while True:
    # Capture frame-by-frame
    ret, frame = video_capture.read()
Here, we capture the video. The read() function reads one frame from the video source, which in this example is the webcam. It returns two values:

- A return code, which tells us whether the frame was grabbed successfully (it is False when, say, the camera is unplugged or a video file runs out of frames)
- The actual video frame read (one frame on each loop)
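The script above uses the frame without checking that return code, which is fine as long as the webcam cooperates. A slightly more defensive loop (my variation, not the post’s) bails out when the grab fails:

import cv2

video_capture = cv2.VideoCapture(0)
while True:
    ret, frame = video_capture.read()
    if not ret:
        break  # grab failed: camera unplugged, or a video file ran out of frames
    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
video_capture.release()
cv2.destroyAllWindows()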
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

eyes = eyeCascade.detectMultiScale(gray, 1.3, 5)

# Draw a rectangle around the eyes
for (x, y, w, h) in eyes:
    cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)

# Display the resulting frame
cv2.imshow('Video', frame)
Again, this code should be familiar. We convert the frame to grayscale (the cascade operates on grayscale images), search it for eyes, draw a green rectangle around each match, and display the result.
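The two bare numbers passed to detectMultiScale are scaleFactor and minNeighbors. Spelled out with keyword arguments (plus an optional minSize, which the original call leaves at its default; the 20x20 value is just illustrative), the same call would read:

eyes = eyeCascade.detectMultiScale(
    gray,
    scaleFactor=1.3,   # shrink the image ~30% at each scale step; values closer to 1 are slower but more thorough
    minNeighbors=5,    # a candidate needs this many overlapping detections before it is kept
    minSize=(20, 20)   # illustrative: ignore any detection smaller than 20x20 pixels
)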
if cv2.waitKey(1) & 0xFF == ord('q'):
    break
We wait one millisecond for a key press on each pass through the loop, and exit the script if the ‘q’ key is pressed. (The & 0xFF masks waitKey’s return value down to its low byte, so the comparison with ord('q') behaves the same on every platform.)

# When everything is done, release the capture
video_capture.release()
cv2.destroyAllWindows()

Finally, we release the capture and close the preview window.
Let’s test it:

$ python eye_detect.py haarcascade_mcs_eyes.xml
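If you don’t have haarcascade_mcs_eyes.xml handy, recent opencv-python wheels bundle the stock cascades with the package; haarcascade_eye.xml is a reasonable stand-in, though its detections will differ somewhat:

import cv2

# pip-installed OpenCV keeps its bundled cascade XML files in this folder
print(cv2.data.haarcascades + "haarcascade_eye.xml")

Pass whatever path that prints as the script’s argument.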
Here’s the output on video:
So, that’s me with my driver’s license in my hand. And you can see that the algorithm tracks both my real eyes and the eyes in the photo. Note that when I move slowly, the algorithm can keep up. When I move my hand up to my face a bit faster, though, it gets confused and mistakes my wrist for an eye.
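If false positives like the wrist bother you, the detectMultiScale parameters broken down above are the first knobs to try: a larger minNeighbors demands more corroborating detections before a rectangle is kept, at the cost of occasionally missing real eyes. Untuned, just to show the idea:

eyes = eyeCascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=8)  # stricter than the 5 used above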