VisageCloud makes face recognition as easy as possible, so you can focus your energy on your creativity and the specifics of your app, without having to worry about managing deep learning, classifiers, perspective alignment, color space and all the other hassles. In this document, we’ll go through the domain model and some example API calls.
In this post, we’ll go from getting your API key, to creating your collection of known profiles (a profile represents a person), to detecting faces in photos and mapping them to profiles (much like tagging on Facebook), and finally to using that collection to recognize people in new photos. All you need is to make HTTP calls to our API, from any language of choice (Java, Python, PHP, Node.js, .NET). If you’re unfamiliar with programming, worry not: you can use the Postman extension for Chrome, which allows you to make HTTP calls without writing a line of code. If you’re more comfortable with the command line, we also provide cURL examples. So let’s get crackin’...
To request an API key, just fill out this form.
Ideally, provide us with some details about your intended use case, so we can properly tune your account and offer the proper guidance. After we send you the keys, do not forget to replace them in the example calls provided below. For security reasons, we have not provided keys in the examples below.
We will provide three keys:
- accessKey, which identifies your account on every call;
- secretKey, which authorizes write calls (creating collections, profiles and mappings);
- readOnlyKey, which authorizes detection/recognition calls only.
You must authenticate every API call by setting the GET parameter “accessKey” to the value of accessKey, and the parameter “secretKey” to the value of secretKey for write calls or to the value of readOnlyKey for detection/recognition calls.
In other words, the analysis and recognition end-points accept the readOnlyKey as the “secretKey” parameter. Using the readOnlyKey makes sense if, for instance, you perform detection/recognition calls directly from a mobile app, without proxying them through your backend server component: you don’t want to leave a key that can modify your collection (i.e. the secretKey) on a device you don’t control (the user’s device). As a best practice, treat all three keys as secret and do not expose them to the user, either in client-side code or in binaries.
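In Python, the authentication pattern can be sketched as a small helper that appends the right key pair to any endpoint. This is a sketch, not official client code: the base URL is an assumption (use the host from your welcome e-mail), and the keys are placeholders.

```python
from urllib.parse import urlencode

# Placeholder keys -- substitute the values you received by e-mail.
ACCESS_KEY = "YOUR_ACCESS_KEY"
SECRET_KEY = "YOUR_SECRET_KEY"          # write calls only
READ_ONLY_KEY = "YOUR_READ_ONLY_KEY"    # detection/recognition calls

# Assumed base URL -- use the host from your welcome e-mail.
BASE = "https://api.visagecloud.com"

def authenticated_url(path, write=False, **params):
    """Append the authentication GET parameters to an endpoint path."""
    params["accessKey"] = ACCESS_KEY
    params["secretKey"] = SECRET_KEY if write else READ_ONLY_KEY
    return BASE + path + "?" + urlencode(params)

print(authenticated_url("/analysis/detection"))           # signed with readOnlyKey
print(authenticated_url("/profile/profile", write=True))  # signed with secretKey
```

Keeping the key choice in one helper makes it hard to accidentally ship the secretKey inside a client-side call.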
In order to group all the people you want to register in your system in a manageable way, you need to create a collection. Think of a collection as a set or group of registered people. Create the collection and give it a name (like “Actors”, “Models”, “Celebrities”, “Friends of Donald Trump”). VisageCloud will return a collectionId; copy and store it, as you’ll need it later. Perform a POST call to /collection/collection in Postman, cURL or any other language you prefer.
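A Python sketch of the collection-creation call, using only the standard library. The host and the “name” parameter are assumptions; the /collection/collection path and the secretKey requirement come from the text. The snippet builds the request without sending it, since real keys are needed on the wire.

```python
from urllib.parse import urlencode
from urllib.request import Request

ACCESS_KEY = "YOUR_ACCESS_KEY"
SECRET_KEY = "YOUR_SECRET_KEY"        # creating a collection is a write call
BASE = "https://api.visagecloud.com"  # assumed host

# "name" is an assumed parameter for the collection's label.
query = urlencode({
    "accessKey": ACCESS_KEY,
    "secretKey": SECRET_KEY,
    "name": "Actors",
})
request = Request(BASE + "/collection/collection?" + query, method="POST")

# With valid keys you would send it and keep the returned collectionId:
#   import json
#   from urllib.request import urlopen
#   collection_id = json.load(urlopen(request))["collectionId"]
print(request.get_method(), request.full_url)
```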
A profile represents a person. While it does not have to be named, it’s the element that guarantees person A (and their faces) is treated separately from person B. If you’re adding a profile to the actors collection, that profile would represent “Jessica Lange”, “Robert DeNiro” or “Summer Glau”.
Each profile-creation call should include the collectionId of the collection the profile belongs to and, optionally, a name for the profile.
Perform a POST call to /profile/profile:
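A Python sketch of that call follows. The host and the “screenName” parameter name are assumptions; the /profile/profile path, the collectionId and the secretKey requirement come from the text.

```python
from urllib.parse import urlencode
from urllib.request import Request

ACCESS_KEY = "YOUR_ACCESS_KEY"
SECRET_KEY = "YOUR_SECRET_KEY"         # creating a profile is a write call
COLLECTION_ID = "YOUR_COLLECTION_ID"   # returned when the collection was created
BASE = "https://api.visagecloud.com"   # assumed host

# "screenName" is an assumed parameter name for the profile's label.
query = urlencode({
    "accessKey": ACCESS_KEY,
    "secretKey": SECRET_KEY,
    "collectionId": COLLECTION_ID,
    "screenName": "Jessica Lange",
})
request = Request(BASE + "/profile/profile?" + query, method="POST")
print(request.full_url)
```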
Now here comes the interesting part: detecting the faces and face attributes in a photo.
The easiest way to try this is to copy-paste the code below into an HTML file and load it in a browser. In the example below we set the parameter “storePicture” to “false”, indicating that VisageCloud should discard the original picture once the analysis is done; in this case, the “storeAssetURL” field will be empty in the response.
The following code performs a POST call to /analysis/detection:
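If you’d rather stay server-side, the same upload can be sketched in Python with a hand-built multipart body. The host and the “picture” field name are assumptions; the /analysis/detection path, the readOnlyKey and the “storePicture” parameter come from the text. The request is built but not sent, since real keys and image bytes are needed.

```python
import uuid
from urllib.parse import urlencode
from urllib.request import Request

ACCESS_KEY = "YOUR_ACCESS_KEY"
READ_ONLY_KEY = "YOUR_READ_ONLY_KEY"  # detection works with the read-only key
BASE = "https://api.visagecloud.com"  # assumed host

def multipart(field, filename, payload, content_type="image/jpeg"):
    """Build a minimal multipart/form-data body for a single file field."""
    boundary = uuid.uuid4().hex
    head = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        f"Content-Type: {content_type}\r\n\r\n"
    ).encode()
    tail = f"\r\n--{boundary}--\r\n".encode()
    return head + payload + tail, f"multipart/form-data; boundary={boundary}"

query = urlencode({
    "accessKey": ACCESS_KEY,
    "secretKey": READ_ONLY_KEY,
    "storePicture": "false",   # discard the original picture after analysis
})
# "picture" is an assumed field name; payload would be the raw image bytes.
body, content_type = multipart("picture", "photo.jpg", b"<image bytes>")
request = Request(
    BASE + "/analysis/detection?" + query,
    data=body,
    headers={"Content-Type": content_type},
    method="POST",
)
print(request.full_url)
```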
Alternatively, you can perform the detection on a pictureURL. In this case, VisageCloud will retrieve the picture from the URL you indicate and return the response once the detection is done.
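The pictureURL variant is simpler, since no upload body is needed; everything travels as GET parameters. A sketch, with the host assumed as before:

```python
from urllib.parse import urlencode
from urllib.request import Request

ACCESS_KEY = "YOUR_ACCESS_KEY"
READ_ONLY_KEY = "YOUR_READ_ONLY_KEY"
BASE = "https://api.visagecloud.com"  # assumed host

query = urlencode({
    "accessKey": ACCESS_KEY,
    "secretKey": READ_ONLY_KEY,
    "pictureURL": "https://example.com/photo.jpg",  # VisageCloud fetches this
    "storePicture": "false",
})
request = Request(BASE + "/analysis/detection?" + query, method="POST")
print(request.full_url)
```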
The response from the server will contain a JSON with all the faces detected, as in the example below.
The picture you uploaded may contain several faces, each of which appears in the “faces” array. If no faces are detected in the picture, this array will be empty. Each face instance has a unique “hash” attribute, which you can subsequently use to map that face instance to a profile (person). This association between faceHash and profile is like saying “this face belongs to Mary Jones” or “this face belongs to Penelope Cruz”.

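Pulling the hashes out of the response is then a one-liner. The sample below is trimmed to the two fields the text guarantees (“faces” and “hash”); real responses carry more attributes per face.

```python
import json

# A trimmed detection response shaped like the example above.
raw = """
{
  "faces": [
    {"hash": "f1a2b3"},
    {"hash": "c4d5e6"}
  ]
}
"""
response = json.loads(raw)

# An empty list simply means no faces were detected in the picture.
face_hashes = [face["hash"] for face in response["faces"]]
print(face_hashes)  # ['f1a2b3', 'c4d5e6']
```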
This associates a particular face instance detected in a photo with an existing profile. It’s like telling VisageCloud “Hey, this is Norah Jones” or “This is Julianne Moore”.
Perform a POST call to /profile/map:
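A Python sketch of the mapping call. The host and the “profileId”/“faceHash” parameter names are assumptions; the /profile/map path comes from the text, and mapping modifies the collection, so it is signed with the secretKey.

```python
from urllib.parse import urlencode
from urllib.request import Request

ACCESS_KEY = "YOUR_ACCESS_KEY"
SECRET_KEY = "YOUR_SECRET_KEY"        # mapping modifies the collection
BASE = "https://api.visagecloud.com"  # assumed host

# "profileId" and "faceHash" are assumed parameter names.
query = urlencode({
    "accessKey": ACCESS_KEY,
    "secretKey": SECRET_KEY,
    "profileId": "YOUR_PROFILE_ID",   # from the profile-creation response
    "faceHash": "f1a2b3",             # from the detection response
})
request = Request(BASE + "/profile/map?" + query, method="POST")
print(request.full_url)
```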
After you have created a few profiles and mapped one or several faceHashes to each of them, it’s time to test the recognition service. This means you can give a new image to VisageCloud and it will tell you whom each person in it most resembles.
In order to do this, you must also specify the collection you want VisageCloud to search, by its collectionId.
You can achieve this by copy-pasting the code below in a web page and loading it in a browser.
Alternatively, you can use cURL to indicate a pictureURL, which VisageCloud will download.
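The same recognition call can be sketched in Python. Note that the document does not name the recognition endpoint, so the “/analysis/recognition” path below is an assumption (check the API reference); the host is assumed as before, while the collectionId, pictureURL and readOnlyKey usage come from the text.

```python
from urllib.parse import urlencode
from urllib.request import Request

ACCESS_KEY = "YOUR_ACCESS_KEY"
READ_ONLY_KEY = "YOUR_READ_ONLY_KEY"  # recognition works with the read-only key
BASE = "https://api.visagecloud.com"  # assumed host

# "/analysis/recognition" is an assumed path.
query = urlencode({
    "accessKey": ACCESS_KEY,
    "secretKey": READ_ONLY_KEY,
    "collectionId": "YOUR_COLLECTION_ID",            # collection to search in
    "pictureURL": "https://example.com/new-photo.jpg",
})
request = Request(BASE + "/analysis/recognition?" + query, method="POST")
print(request.full_url)
```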
The JSON response will contain an additional element called “recognition”. In its “comparison” sub-object, each detected faceHash has an array of matches, ordered from highest matchRate (lowest distance) to lowest matchRate (highest distance). By default, the API returns the first 10 matches, so as not to overload you with unnecessary data.
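Since the matches arrive pre-sorted, picking the best candidate per face is just taking the first entry. The response shape below is assumed from the description above (only “recognition”, “comparison” and the ordering are stated in the text; “profileId” and “matchRate” field names are illustrative).

```python
# A trimmed recognition response; field names beyond "recognition",
# "comparison" and "matchRate" are assumed for illustration.
response = {
    "recognition": {
        "comparison": {
            "f1a2b3": [
                {"profileId": "p-norah", "matchRate": 0.97},
                {"profileId": "p-julianne", "matchRate": 0.61},
            ]
        }
    }
}

best_matches = {
    face_hash: matches[0]["profileId"]  # first entry has the highest matchRate
    for face_hash, matches in response["recognition"]["comparison"].items()
}
print(best_matches)  # {'f1a2b3': 'p-norah'}
```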
Of course, you can build more complex setups by leveraging labels or attribute filtering. Adding profiles to collections and faces to profiles may be an iterative process, which involves feedback from your user or from your other data sources.
Feel free to contact us and describe your particular use case, so that we can advise you on best practices and make your integration fast, secure and seamless.
Let us explore together how VisageCloud can best work for your use case.