Sunday, 15 July 2018

Mouth Detection in OpenCV-Python Using a Webcam

As mentioned in the first post, it’s quite easy to move from detecting mouths in images to detecting them in video via a webcam, which is exactly what we will detail in this post.
You can use Google to find various Haar cascades for things you may want to detect. We will use a face cascade, a mouth cascade, a nose cascade and an eye cascade, and you shouldn’t have too much trouble finding any of them. You can find a few more in the root directory of Haar cascades. Note the license for using/distributing these Haar cascades.
Let’s begin our code. I am assuming you have downloaded haarcascade_eye.xml, haarcascade_frontalface_default.xml, haarcascade_mcs_mouth.xml and haarcascade_mcs_nose.xml from the links above, and have these files in your project’s directory.
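Before wiring up the webcam, it’s worth confirming the downloads parsed correctly. A minimal sanity check (just a sketch, assuming the XML files sit in your working directory) loads each cascade and calls empty() on it:

import cv2

# Load each downloaded cascade and report any that failed to parse.
for name in ("haarcascade_frontalface_default.xml",
             "haarcascade_eye.xml",
             "haarcascade_mcs_mouth.xml",
             "haarcascade_mcs_nose.xml"):
    cascade = cv2.CascadeClassifier(name)
    if cascade.empty():
        print("Failed to load", name)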

Prerequisites:

  1. OpenCV installed (see the previous blog post for details)
  2. A working webcam (you can verify both with the quick check below)
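If you’re not sure about either one, this quick check (a minimal sketch) prints the installed OpenCV version and whether OpenCV can open the default webcam:

import cv2

print(cv2.__version__)                 # confirms OpenCV is importable
print(cv2.VideoCapture(0).isOpened())  # True if the default webcam opened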

The Code

Let’s dive straight into the code:
import cv2
import sys

cascPath = sys.argv[1]
mouthCascade = cv2.CascadeClassifier(cascPath)

video_capture = cv2.VideoCapture(0)

while True:
    # Capture frame-by-frame
    ret, frame = video_capture.read()

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    mouth = mouthCascade.detectMultiScale(gray, 1.3, 5)
    # Draw a rectangle around the detected mouths
    for (x, y, w, h) in mouth:
        cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)

    # Display the resulting frame
    cv2.imshow('Video', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
video_capture.release()
cv2.destroyAllWindows()
Now let’s break it down…
import cv2
import sys

cascPath = sys.argv[1]
mouthCascade = cv2.CascadeClassifier(cascPath)
This should be familiar to you. We are creating a mouth cascade from the path passed on the command line, as we did in the image example.
video_capture = cv2.VideoCapture(0)
This line sets the video source to the default webcam, which OpenCV can easily capture.
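The argument 0 is the index of the capture device. If your machine has more than one camera, you can pass 1, 2, and so on; here is a small sketch (the range of indices tried is illustrative) that keeps the first camera that opens:

# Try camera indices 0-2 and keep the first one that opens (sketch).
video_capture = None
for index in range(3):
    candidate = cv2.VideoCapture(index)
    if candidate.isOpened():
        video_capture = candidate
        break
    candidate.release()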
while True:
    # Capture frame-by-frame
    ret, frame = video_capture.read()
Here, we capture the video. The read() function reads one frame from the video source, which in this example is the webcam. It returns:
  1. A return code
  2. The actual video frame read (one frame on each loop)
The return code tells us if we have run out of frames, which will happen if we are reading from a file. This doesn’t matter when reading from the webcam, since we can record forever, so we will ignore it.
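If you later point the same loop at a video file rather than the webcam, you would check that return code and stop when the frames run out. A sketch ('video.mp4' is a hypothetical file name):

video_capture = cv2.VideoCapture('video.mp4')  # hypothetical file name
while True:
    ret, frame = video_capture.read()
    if not ret:  # no more frames left in the file
        break
    # ... process frame as below ...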
# Capture frame-by-frame
ret, frame = video_capture.read()

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

mouth = mouthCascade.detectMultiScale(gray, 1.3, 5)

# Draw a rectangle around the mouth
for (x, y, w, h) in mouth:
    cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)

# Display the resulting frame
cv2.imshow('Video', frame)
Again, this code should be familiar. We are merely searching for mouths in each captured frame.
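The two numbers we pass to detectMultiScale(), 1.3 and 5, are the scaleFactor and minNeighbors parameters. Spelled out with keyword arguments (the minSize value here is an illustrative addition, not part of the script above):

mouth = mouthCascade.detectMultiScale(
    gray,
    scaleFactor=1.3,   # how much the image is scaled down at each pyramid level
    minNeighbors=5,    # overlapping detections required to accept a candidate
    minSize=(30, 30),  # ignore candidate regions smaller than 30x30 pixels
)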
if cv2.waitKey(1) & 0xFF == ord('q'):
    break
waitKey(1) waits one millisecond for a key press. If the ‘q’ key is pressed, we break out of the loop and the script exits.
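On some platforms waitKey() returns more than eight meaningful bits, which is why the result is masked with 0xFF before the comparison. If you’d also like the Esc key to quit (an illustrative addition):

key = cv2.waitKey(1) & 0xFF  # mask to the low byte of the key code
if key == ord('q') or key == 27:  # 27 is the Esc key code
    break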
# When everything is done, release the capture
video_capture.release()
cv2.destroyAllWindows()
Let’s test it, passing the mouth cascade as the argument:
$ python mouth_detect.py haarcascade_mcs_mouth.xml

Here’s the output on video:


So, that’s me with my driver’s license in my hand. And you can see that the algorithm tracks both the real me and the photo me. Note that when I move slowly, the algorithm can keep up. When I move my hand up to my face a bit faster, though, it gets confused and mistakes my wrist for a mouth.
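One common way to reduce false positives like that (a refinement, not what the script above does) is to run the face cascade first and then look for a mouth only in the lower half of each detected face, using the haarcascade_frontalface_default.xml we downloaded earlier. A sketch:

faceCascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

faces = faceCascade.detectMultiScale(gray, 1.3, 5)
for (fx, fy, fw, fh) in faces:
    # Search only the lower half of the face region for a mouth.
    lower_face = gray[fy + fh // 2 : fy + fh, fx : fx + fw]
    mouths = mouthCascade.detectMultiScale(lower_face, 1.3, 5)
    for (mx, my, mw, mh) in mouths:
        # Map ROI coordinates back to the full frame before drawing.
        cv2.rectangle(frame,
                      (fx + mx, fy + fh // 2 + my),
                      (fx + mx + mw, fy + fh // 2 + my + mh),
                      (0, 255, 0), 2)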
