Today we’re going to learn how to work with images to detect faces and extract facial features such as the eyes, nose, and mouth. This method has the potential to do many amazing things, from comparing faces, to capturing facial features, to tagging people in photos, either manually or through machine learning. You can also create effects and filters to “enhance” your images, similar to the ones you see in Snapchat.

We’ve previously covered how to work with OpenCV to detect shapes in images, but today we’re taking it to a new level by introducing DLib and extracting facial features from an image.


But first of all, what is DLib? Well, it’s an advanced machine learning library that was created to solve complex real-world problems. The library is written in the C++ programming language and provides interfaces for C/C++, Python, and Java.

It’s also worth noting that this tutorial may require some previous understanding of the OpenCV library, such as how to read images, open the camera, and apply basic image-processing techniques.

So, how does it work?

Our face has several features that can be identified, like our eyes, mouth, nose, etc. When we use DLib algorithms to detect these features, we actually get a map of points that surround each feature.
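As a minimal sketch of that detection step, the snippet below uses DLib's frontal face detector together with its pretrained landmark predictor. It assumes you have downloaded the `shape_predictor_68_face_landmarks.dat` model file from dlib.net and have `opencv-python` and `dlib` installed; the `image_path` and `model_path` arguments are placeholders for your own files.

```python
def shape_to_points(shape, count=68):
    """Convert a dlib landmark result into a plain list of (x, y) tuples."""
    return [(shape.part(i).x, shape.part(i).y) for i in range(count)]


def detect_landmarks(image_path, model_path="shape_predictor_68_face_landmarks.dat"):
    """Return one list of 68 (x, y) landmark points per face found in the image."""
    import cv2   # OpenCV loads the image and converts it to grayscale
    import dlib  # DLib supplies the face detector and the landmark predictor

    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    detector = dlib.get_frontal_face_detector()   # HOG-based face detector
    predictor = dlib.shape_predictor(model_path)  # pretrained 68-point model

    # The second argument (1) upsamples the image once to help find smaller faces
    return [shape_to_points(predictor(gray, rect)) for rect in detector(gray, 1)]
```

Each inner list surrounds one face's features in order, so you can draw the points with OpenCV's `cv2.circle` to visualize the map.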

This map is composed of 68 points (called landmark points) and can identify the following features:
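To make the grouping concrete, here is how the 68 indices are conventionally split among the features in the annotation scheme that DLib's pretrained predictor follows (the group names below mirror the common convention used by libraries such as imutils; "right" and "left" refer to the subject, not the viewer):

```python
# Inclusive-exclusive index ranges of the 68-point facial landmark scheme
LANDMARK_GROUPS = {
    "jaw":           range(0, 17),   # 17 points along the jawline
    "right_eyebrow": range(17, 22),  # 5 points
    "left_eyebrow":  range(22, 27),  # 5 points
    "nose":          range(27, 36),  # 9 points (bridge + nostrils)
    "right_eye":     range(36, 42),  # 6 points
    "left_eye":      range(42, 48),  # 6 points
    "mouth":         range(48, 68),  # 20 points (outer + inner lips)
}

total = sum(len(r) for r in LANDMARK_GROUPS.values())
print(total)  # prints 68
```

These ranges make it easy to slice out a single feature, e.g. `points[48:68]` gives you just the mouth.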