Build a Celebrity Look-Alike
Mon 08 July 2019
Last updated
Mon 08 July 2019
This article describes how to use Microsoft Azure's Cognitive Services Face API and python to identify, count and classify people in a picture. In addition, it shows how to use the service to compare two face images and tell if they are the same person. We will try it out with several celebrity look-alikes to see if the algorithm can tell the difference between two similar Hollywood actors. By the end of the article, you should be able to use these examples to further explore Azure's Cognitive Services with python and incorporate them into your own projects.
The basic idea behind Azure's Cognitive Services is that Microsoft has done a lot of the heavy lifting to build and deploy AI models for specific tasks. There is no need to understand the technology behind the scenes because the Cognitive Services APIs provide a relatively simple way to apply this already trained AI framework to your own problems. All that is required is setting up an account and using the REST API to process your data. Since I have not done much work with python's native vision libraries, I thought I would explore using the Face API to get a sense of what types of tasks it might be suited for.
At a high level, we can use the Face API to determine many elements of a person's face in a picture, including:
The number of faces and where they are in the picture
Traits of the faces, such as whether the person is wearing glasses, has makeup or has facial hair
The emotion the face conveys (such as anger, contempt, disgust, fear, happiness, neutral, sadness or surprise)
The identity of individuals, and whether two different pictures are of the same person
In other words, there is a lot of power in this API and it can be easily accessed with python.
In order to get started, you need an active Azure account with Cognitive Services enabled.
If you do not already have one, create an Azure account or log in to your existing one. This is a paid service, but new users can get a free trial. In addition, your company or educational institution might already be using Azure, so be sure to check what options are available.
Once your Azure account is active, create a Cognitive Services account following the steps in the Microsoft documentation.
Once you are done, you need two key pieces of information:
the API endpoint
your key
The API endpoint will be based on the location you choose. For me, the endpoint is: https://northcentralus.api.cognitive.microsoft.com/
and the key will look something like this: 9a1111e22294eb1bb9999a4a66e07b41
(not my actual key)
Here is where to find it in the Azure portal:
Now that everything is set up with Azure, we can run a quick test to see if it works.
The Cognitive Services documentation is really good, so much of this article is based on the examples in the Python API quickstart.
Before going too much further, I want to cover one topic about determining how to access these services. Microsoft has exposed these services through a REST API which can be used by pretty much any language. They have also created a python SDK which provides a handy wrapper around the REST API and also includes some convenience functions for dealing with images and handling errors more gracefully. My recommendation is to experiment with the REST API to understand how the process works. If you do build production code, you should evaluate using the SDK because of the convenience and more robust error handling.
I have created a streamlined notebook that you can download and follow along with. The step by step directions below are meant to augment the notebook.
Fire up your own jupyter notebook and get the following imports in place:
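The exact import list from the original notebook is not reproduced here; a plausible minimal set, assuming the `requests` and `Pillow` packages are installed, might be:

```python
# Likely imports for calling the REST API and working with images;
# the original notebook's exact list may differ.
from io import BytesIO
from pathlib import Path
from urllib.parse import urlparse

import requests
from PIL import Image, ImageDraw
```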
You don't strictly need all of these imports, but I am going to make some helper functions to simplify displaying and working with the images. That's the main reason I'm including the extra imports.
Next, assign your API key and the appropriate endpoint url. You must use your own key and endpoint; these values will not work if you just copy and paste:
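A sketch of those assignments (the key below is a placeholder, and the variable names are my own choices):

```python
# Placeholders only -- substitute your own key and region-specific endpoint
subscription_key = '9a1111e22294eb1bb9999a4a66e07b41'  # not a real key

face_api_url = 'https://northcentralus.api.cognitive.microsoft.com/face/v1.0/detect'
face_api_url_verify = 'https://northcentralus.api.cognitive.microsoft.com/face/v1.0/verify'
```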
One point to note about the url: the endpoint is https://northcentralus.api.cognitive.microsoft.com/
but the actual url needs to include the API information, in this case /face/v1.0/detect
I am also defining the verify url endpoint, which we will use a little bit later.
Now that everything is set up, we can use the requests module to post some information to our endpoint and see what the API responds with:
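A sketch of such a call, wrapped in a function (the function name and the attribute list are my own choices; the header and parameter names follow the Face API's REST conventions):

```python
import requests


def detect_faces(image_url, subscription_key, face_api_url):
    """POST a remote image url to the detect endpoint and return the parsed json."""
    headers = {'Ocp-Apim-Subscription-Key': subscription_key}
    params = {
        'returnFaceId': 'true',
        'returnFaceLandmarks': 'false',
        'returnFaceAttributes': 'age,gender,emotion,facialHair,glasses,makeup',
    }
    response = requests.post(face_api_url, params=params,
                             headers=headers, json={'url': image_url})
    response.raise_for_status()
    return response.json()
```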
The key function of this code is to pass:
a valid url of an image
our credentials (key + endpoint)
parameters to control the output
In return, we get a nested json response. If we call response.json() we get something that looks like this:
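An abbreviated stand-in for that response (the faceId values and rectangle coordinates here are invented; the attribute values match the analysis discussed below):

```python
# Illustrative only -- a real response carries many more attributes
sample_response = [
    {
        'faceId': '73102e2a-0000-0000-0000-000000000000',  # invented
        'faceRectangle': {'top': 118, 'left': 159, 'width': 91, 'height': 91},
        'faceAttributes': {
            'gender': 'male',
            'age': 30.0,
            'emotion': {'neutral': 0.99, 'happiness': 0.01},
            'makeup': {'eyeMakeup': True, 'lipMakeup': False},
        },
    },
    {
        'faceId': '8c722f2a-0000-0000-0000-000000000000',  # invented
        'faceRectangle': {'top': 102, 'left': 303, 'width': 89, 'height': 89},
        'faceAttributes': {
            'gender': 'female',
            'age': 32.0,
            'emotion': {'neutral': 0.01, 'happiness': 0.99},
            'makeup': {'eyeMakeup': True, 'lipMakeup': True},
        },
    },
]
```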
In this case, the image contained two people, so there are two faceId attributes.
The faceIds are important because they are uniquely generated, tied only to our account and stored for 24 hours. We can use this ID to determine if two faces are equivalent. A little later in this article, I will show an example.
If you want to know the number of people detected in the image, look at the length of the result:
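Since each top-level element of the response is one detected face, a count is just the length of the list (the helper name here is my own):

```python
def face_count(detected_faces):
    """Each top-level element of the detect response is one detected face."""
    return len(detected_faces)
```

So `face_count(response.json())` would return `2` for the image above.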
In addition, you can see that the analysis thinks there is one male aged 30 and one female aged 32. The male has a "neutral" emotion and the female has a "happiness" emotion. Interestingly, the algorithm "thinks" there is eye makeup on both faces.
This is all very interesting, but there are two challenges. First, it would be nice to see an image marked up with the detected faces, and second, it would be nice to run this on local images as well as remote urls.
Fortunately the demo jupyter notebook gives us a really good head start. I am going to leverage that code to build an improved image display function that will:
Work on local files or remote urls
Return the json data
Give us the option to display a portion of the faceID on the image to make it easier for future analysis
In order to get this code to work on a local file, we need to change our function call in two ways. First, the header must have a content type of 'application/octet-stream', and second, we must pass the image_data via the data parameter.
Here is what the call will look like for a sample image on the local computer:
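A sketch of that local-file call, again wrapped in a function of my own naming:

```python
import requests


def detect_faces_local(image_path, subscription_key, face_api_url):
    """Read a local image and POST the raw bytes to the detect endpoint."""
    headers = {
        'Ocp-Apim-Subscription-Key': subscription_key,
        # Local files are sent as raw bytes rather than a json url payload
        'Content-Type': 'application/octet-stream',
    }
    params = {'returnFaceId': 'true',
              'returnFaceAttributes': 'age,gender,emotion'}
    with open(image_path, 'rb') as f:
        image_data = f.read()
    response = requests.post(face_api_url, params=params,
                             headers=headers, data=image_data)
    response.raise_for_status()
    return response.json()
```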
In order to streamline this process and annotate images, I've created an updated annotate_image() function that can parse a local file or a remote URL, then show where the algorithm thinks the faces are:
Here is the full function:
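The version below is a reconstruction rather than the original: a sketch assuming the `requests` and `Pillow` libraries, with parameter names (such as `show_face_id`) of my own choosing.

```python
from io import BytesIO
from urllib.parse import urlparse

import requests
from PIL import Image, ImageDraw


def annotate_image(image_url, subscription_key, api_url, show_face_id=False):
    """Detect faces in a remote url or local file:// url, draw a box around
    each face, and return (PIL image, json response)."""
    headers = {'Ocp-Apim-Subscription-Key': subscription_key}
    params = {'returnFaceId': 'true',
              'returnFaceAttributes': 'age,gender,emotion'}

    parsed = urlparse(image_url)
    if parsed.scheme == 'file':
        # Local file: send the raw bytes with an octet-stream content type
        with open(parsed.path, 'rb') as f:
            image_data = f.read()
        headers['Content-Type'] = 'application/octet-stream'
        response = requests.post(api_url, params=params,
                                 headers=headers, data=image_data)
    else:
        # Remote url: the API fetches the image itself; grab a copy
        # locally so we have something to draw on
        response = requests.post(api_url, params=params,
                                 headers=headers, json={'url': image_url})
        image_data = requests.get(image_url).content

    response.raise_for_status()
    faces = response.json()

    img = Image.open(BytesIO(image_data))
    draw = ImageDraw.Draw(img)
    for face in faces:
        rect = face['faceRectangle']
        left, top = rect['left'], rect['top']
        right, bottom = left + rect['width'], top + rect['height']
        draw.rectangle([left, top, right, bottom], outline='red', width=3)
        if show_face_id:
            # Overlay just the first few characters of the faceId
            draw.text((left, bottom + 4), face['faceId'][:8], fill='red')
    return img, faces
```

Calling it with a regular https url analyzes a remote image, while a file:// url routes through the local-file branch; passing show_face_id=True overlays a prefix of each faceId on the picture.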
Here's how it works:
If you want to call it on a local file, use a file url that looks like this:
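For example (the path here is hypothetical), a file url has a file scheme in front of an absolute path, which urlparse splits cleanly:

```python
from urllib.parse import urlparse

# A local image is referenced with a file:// url (hypothetical path)
local_url = 'file:///home/user/images/sample.jpg'

parsed = urlparse(local_url)
print(parsed.scheme)  # -> file
print(parsed.path)    # -> /home/user/images/sample.jpg
```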
Going back to the Pam and Jim example, you can view the json response like this:
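A small helper (my own naming) makes the nested response easier to read:

```python
import json


def format_json(data):
    """Return the API response as an indented string for easier reading."""
    return json.dumps(data, indent=4)
```

Then `print(format_json(json_data))` shows the full nested structure.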
You'll notice that the prefix of the faceId is shown in the image, which makes the entire analysis process a little bit easier when developing your own solution.
In addition to showing the actual face information, we can use the Verify Face API to check if two faces are of the same person. This should work regardless of age, facial hair, makeup, glasses or other superficial changes. In my opinion, this shows the significant advances that have been made in image processing over the past few years. We now have the power to quickly and easily analyze images with a simple API call. Pretty impressive.
In order to simplify the process, I created a small function that takes two faceIds and checks if they belong to the same person:
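A sketch of such a function (the name and parameter order are my own; the faceId1/faceId2 body and the isIdentical/confidence response fields follow the Verify endpoint's documented shape):

```python
import requests


def face_compare(id_1, id_2, subscription_key, verify_url):
    """Ask the verify endpoint whether two faceIds belong to the same person."""
    headers = {'Ocp-Apim-Subscription-Key': subscription_key}
    body = {'faceId1': id_1, 'faceId2': id_2}
    response = requests.post(verify_url, headers=headers, json=body)
    response.raise_for_status()
    result = response.json()
    verdict = 'the same person' if result['isIdentical'] else 'NOT the same person'
    print(f"The faces are {verdict}, with {result['confidence']:.1%} confidence.")
    return result
```

Comparing two images is then a matter of passing their two faceIds along with the verify endpoint url.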
Since we have a picture of a young Jim, let's see if it's the same Jim (aka John Krasinski) with a beard. We can annotate this new image and inspect the json results to get the faceId of the second image:
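Pulling the faceId out of the detect response is just indexing into the json; with a stand-in response (the id here is invented):

```python
# Stand-in detect response for the second image (the faceId is invented)
json_resp = [{'faceId': 'b7c3f4d2-0000-0000-0000-000000000000'}]

second_face_id = json_resp[0]['faceId']
print(second_face_id[:8])  # -> b7c3f4d2
```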
Now we can compare the two faceIds to see if they are truly the same person:
Very cool. The API identified that this was the same person with a 63.7% confidence.
We can have a little fun with this and see if the computer can tell apart two people who look very similar. For instance, can it tell Zooey Deschanel apart from Katy Perry?
They are very similar. Let's see what Cognitive Services thinks:
Ok. It's close, but according to the algorithm, they are not the same person.
Let's try one more that is even more difficult. Rob Lowe and Ian Somerhalder are another pair that frequently show up on celebrity look-alike lists.
Woah! I guess Rob Lowe and Ian Somerhalder even confuse the AI!
In my limited testing, the algorithm works pretty well. The processing works best when the faces are looking directly at the camera and there is good lighting and contrast. In addition, the files must be less than 10MB in size, and the maximum number of faces it can identify is 100.
Here's a group example:
Which works pretty well. However, this attempt only found two faces.
There are additional detection models available which might perform better in this scenario. If you are interested in pursuing this further, I would recommend taking a look at their performance to see if it improves results.
Despite these types of challenges, it is very impressive how far the computer vision field has come and how much capability is made available through these solutions.
Despite the somewhat clickbait headline, I do think this is a really useful capability. We have gotten used to Google and Facebook being able to identify people in pictures, so this is a feature we need to understand more. While there are security and privacy concerns with this technology, I think there are still valid use cases where it can be very beneficial in a business context.
The Cognitive Services API provides additional features that I did not have time to cover in the article but this should give you a good start for future analysis. In addition, the capabilities are continually being refined so it is worth keeping an eye on it and seeing how these services change over time.
This article was a bit of a departure from my standard articles but I will admit it was a really fun topic to explore. Please comment below if you find this helpful and are interested in other similar topics.
Reference: https://pbpython.com/python-face-detect.html