Preface
This article series is a complete, end-to-end tutorial that explains the concepts of face detection and face recognition using a modern AI-based Azure Cognitive Service: the Azure Face API.
Introduction
This article walks through a face classification application that performs face detection, identification, grouping, and finding look-alike faces. Readers are expected to go through the prior three articles in the series before reading this one.
Tutorial Series
The entire series on learning the Face API cognitive service is divided into four parts. The first part focuses on Azure Functions, serverless computing, and creating and testing the Face API on the Azure portal.
The second part explains the use of the Face API SDK. The third part focuses on face identification: person groups are created, and faces are identified by training a model, i.e., via machine-learning techniques. The fourth part, the most interesting one, walks through a face classification application that performs face detection, identification, grouping, and finding look-alike faces. Following is the four-part series.
- Face API Cognitive Service Day 1: Face API on Azure Portal.
- Face API Cognitive Service Day 2: Exploring Face API SDK.
- Face API Cognitive Service Day 3: Face Identification using Face API.
- Face API Cognitive Service Day 4: Face classification app using Face API.
Face Classification App
Getting the Code
I have already created the app, and you can get it from the downloaded source code or from the Git URL: https://github.com/akhilmittal/Face-API. In this app we'll perform operations such as creating person groups and persons, detecting and identifying faces, verifying faces, grouping faces, and finding look-alike (i.e., similar-looking) faces.
- Go to the GitHub page and click the Clone or Download button to get the Git URL of the code. Copy that URL.
- Open the command prompt, but first make sure Git is installed on your computer. Move to the directory where you want to fetch the source code and run git clone <git URL> there on the command prompt.
- Once the cloning is done, it is time to open the application. I am opening it in VS Code, so if you are using VS Code, just go into the fetched code directory from the command prompt and type the command “code .”. This opens VS Code with the fetched code.
Following is the VS Code opened with the solution.
- Let’s install the packages before we get started. In the command window, type npm install to install all the packages the application needs.
Set-up the code
- Once the code is downloaded, opened in the code editor, and the packages are installed, we can look at what lies within. Open the face-api-service.service.ts file and, at the top of the file, provide your base URL for the baseUrl variable as shown below.
Similarly, provide the key for your API in the Ocp-Apim-Subscription-Key field at the end of the same file.
The code for the file is as follows.
import { Injectable } from '@angular/core';
import { HttpClient, HttpHeaders } from '@angular/common/http';
import { Observable } from 'rxjs/Observable';
import 'rxjs/add/operator/mergeMap';
import 'rxjs/add/observable/forkJoin';
import 'rxjs/add/observable/of';

@Injectable()
export class FaceApiService {
  private baseUrl = 'https://centralindia.api.cognitive.microsoft.com/face/v1.0';

  constructor(private http: HttpClient) { }

  // ***** Person Group Operations *****
  getPersonGroups() {
    return this.http.get<any[]>(`${this.baseUrl}/persongroups`, httpOptions);
  }

  createPersonGroup(personGroup) {
    return this.http.put<any[]>(`${this.baseUrl}/persongroups/${personGroup.personGroupId}`, personGroup, httpOptions);
  }

  deletePersonGroup(personGroupId) {
    return this.http.delete(`${this.baseUrl}/persongroups/${personGroupId}`, httpOptions);
  }

  trainPersonGroup(personGroupId) {
    return this.http.post<any[]>(`${this.baseUrl}/persongroups/${personGroupId}/train`, null, httpOptions);
  }

  getPersonGroupTrainingStatus(personGroupId) {
    return this.http.get<any>(`${this.baseUrl}/persongroups/${personGroupId}/training`, httpOptions);
  }

  // ***** Persons Operations *****
  getPersonsByGroup(personGroupId) {
    return this.http.get<any[]>(`${this.baseUrl}/persongroups/${personGroupId}/persons`, httpOptions);
  }

  getPerson(personGroupId, personId) {
    return this.http.get<any[]>(`${this.baseUrl}/persongroups/${personGroupId}/persons/${personId}`, httpOptions);
  }

  // ***** Person Operations *****
  createPerson(personGroupId, person) {
    return this.http.post<any>(`${this.baseUrl}/persongroups/${personGroupId}/persons`, person, httpOptions);
  }

  deletePerson(personGroupId, personId) {
    return this.http.delete<any[]>(`${this.baseUrl}/persongroups/${personGroupId}/persons/${personId}`, httpOptions);
  }

  // ***** Person Face Operations *****
  getPersonFaces(personGroupId, personId) {
    return this.http.get<any>(`${this.baseUrl}/persongroups/${personGroupId}/persons/${personId}`, httpOptions).flatMap(person => {
      let obsList = [];
      if (person.persistedFaceIds.length) {
        for (const faceId of person.persistedFaceIds) {
          obsList.push(this.getPersonFace(personGroupId, personId, faceId));
        }
        return Observable.forkJoin(obsList);
      } else {
        return Observable.of([]);
      }
    });
  }

  getPersonFace(personGroupId, personId, faceId) {
    return this.http.get(`${this.baseUrl}/persongroups/${personGroupId}/persons/${personId}/persistedfaces/${faceId}`, httpOptions);
  }

  addPersonFace(personGroupId, personId, url) {
    return this.http.post<any>(`${this.baseUrl}/persongroups/${personGroupId}/persons/${personId}/persistedfaces?userData=${url}`, { url: url }, httpOptions);
  }

  deletePersonFace(personGroupId, personId, faceId) {
    return this.http.delete(`${this.baseUrl}/persongroups/${personGroupId}/persons/${personId}/persistedfaces/${faceId}`, httpOptions);
  }

  // ***** Face List Operations *****
  createFaceList(faceListId) {
    return this.http.put(`${this.baseUrl}/facelists/${faceListId}`, { name: faceListId }, httpOptions);
  }

  addFace(faceListId, url) {
    return this.http.post(`${this.baseUrl}/facelists/${faceListId}/persistedFaces`, { url: url }, httpOptions);
  }

  // ***** Face Operations *****
  detect(url) {
    return this.http.post<any[]>(`${this.baseUrl}/detect?returnFaceLandmarks=false&returnFaceAttributes=age,gender,smile,glasses,emotion,facialHair`, { url: url }, httpOptions);
  }

  identify(personGroupId, faceIds) {
    let request = {
      personGroupId: personGroupId,
      faceIds: faceIds,
      confidenceThreshold: 0.4
    };
    return this.http.post<any[]>(`${this.baseUrl}/identify`, request, httpOptions);
  }

  group(faceIds) {
    return this.http.post<any>(`${this.baseUrl}/group`, { faceIds: faceIds }, httpOptions);
  }

  findSimilar(faceListId, faceId) {
    let request = { faceId: faceId, faceListId: faceListId };
    return this.http.post<any>(`${this.baseUrl}/findsimilars`, request, httpOptions);
  }
}

// private (non-exported)
const httpOptions = {
  headers: new HttpHeaders({
    'Content-Type': 'application/json',
    'Ocp-Apim-Subscription-Key': '<key>'
  })
};
If you look closely at the code, all the face operations we performed previously are defined here. They just need to be called from the UI, which is what the application does.
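To make the endpoint shapes easier to see in isolation, here is a small standalone sketch of how the service's template strings compose Face API URLs. The helper names `detectUrl` and `personFaceUrl` are mine for illustration only; the real service inlines these template literals inside each method.

```typescript
// Hypothetical helpers mirroring the URL composition in FaceApiService.
const baseUrl = 'https://centralindia.api.cognitive.microsoft.com/face/v1.0';

function detectUrl(attributes: string[]): string {
  // detect takes the attributes to return as a comma-separated query parameter
  return `${baseUrl}/detect?returnFaceLandmarks=false` +
         `&returnFaceAttributes=${attributes.join(',')}`;
}

function personFaceUrl(groupId: string, personId: string, faceId: string): string {
  // persisted faces are nested under the person, which is nested under the group
  return `${baseUrl}/persongroups/${groupId}/persons/${personId}/persistedfaces/${faceId}`;
}
```

Every other operation in the service follows the same pattern: the resource hierarchy (`persongroups` → `persons` → `persistedfaces`, or `facelists` → `persistedFaces`) is encoded directly in the path.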
Compile and Run the Application
It is an Angular application, so you can run it from the VS Code terminal or from the command window. I am running it from the command window: type the ng serve command and press Enter.
Once it compiles and the dev server is running, you'll get the URL of the application. In my case, it is running at localhost on port 4200.
Copy that URL and open it in the browser. We see the application running. The home page has Set-up and Face Detection buttons.
Create a person group
- Click on Set-up and add a person group. Note that the UI is bound to all the API calls in the background. We already explored all the API calls for creating persons, groups, and faces, so just go through the application code to explore the files and see how they are bound to make calls to the API.
- Add a person group and name it family.
- Once the person group is created and shown, add persons to that person group.
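One detail worth noting: the app derives the personGroupId from the display name with lodash's `_.kebabCase` (so “family” becomes `family`, and a multi-word name would become hyphenated). A minimal, dependency-free stand-in for that transform might look like the sketch below; `toGroupId` is a hypothetical name, not part of the app.

```typescript
// Simplified stand-in for lodash's _.kebabCase as used for personGroupId.
function toGroupId(name: string): string {
  return name
    .trim()
    .replace(/([a-z])([A-Z])/g, '$1 $2')  // split camelCase words
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')          // non-alphanumerics become hyphens
    .replace(/^-+|-+$/g, '');             // trim stray leading/trailing hyphens
}
```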
Create a person
Once a person is added, add a new face to that person.
Add face
In the “Add Face” popup, provide the URL of the image of the person’s face and click Save. It will show the image below.
Similarly, add more persons to the person group. For example, I added Akhil Mittal, Arsh, and Udeep as persons, and added three faces for Akhil Mittal.
I added four faces for Arsh,
and three faces for Udeep.
Train the person group
Now, if you remember, the next thing we did after adding person groups, persons, and faces was to train the model. So, click Train Model, which in the background is bound to the train API endpoint; it will train our person-group model and make it ready for detection and identification. Once you hit the “Train Model” button, you see the “Training Initiated” message.
Once training is done, you see the “Training Succeeded” message when you press the “Check Training Status” button.
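The status check boils down to mapping the API's training status to a toast notification. The sketch below mirrors the `switch` statement in `getGroupTrainingStatus` shown later; the `Toast` type and the `toastForTrainingStatus` name are mine for illustration.

```typescript
// Status-to-toast mapping, mirroring the component's switch statement.
type Toast = { level: string; title: string };

function toastForTrainingStatus(status: string): Toast {
  switch (status) {
    case 'succeeded': return { level: 'success', title: 'Training Succeeded' };
    case 'running':   return { level: 'info', title: 'Training still in progress...' };
    case 'failed':    return { level: 'error', title: 'Error during Training' };
    default:          return { level: 'info', title: `Status: ${status}` };
  }
}
```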
Code for all the operations
In configuration.component.ts, we have the component logic that performs all these operations.
import { Component, OnInit } from '@angular/core';
import { FaceApiService } from '../services/face-api-service.service';
import { InputBoxService } from '../input-box/input-box.service';
import * as _ from 'lodash';
import { ToasterService } from 'angular2-toaster';

@Component({
  selector: 'app-configuration',
  templateUrl: './configuration.component.html',
  styleUrls: ['./configuration.component.css']
})
export class ConfigurationComponent implements OnInit {
  public loading = false;
  public personFaces = [];
  public personGroups = [];
  public personList = [];
  public selectedGroupId = '';
  public selectedPerson: any;

  constructor(private faceApi: FaceApiService, private inputBox: InputBoxService, private toastr: ToasterService) { }

  ngOnInit() {
    this.faceApi.getPersonGroups().subscribe(data => this.personGroups = data);
  }

  addPersonGroup() {
    this.inputBox.show('Add Person Group', 'Person Group Name:').then(result => {
      let newPersonGroup = { personGroupId: _.kebabCase(result), name: result };
      this.faceApi.createPersonGroup(newPersonGroup).subscribe(data => {
        this.personGroups.push(newPersonGroup);
        this.selectedGroupId = newPersonGroup.personGroupId;
        this.onGroupsChange();
      });
    });
  }

  deletePersonGroup() {
    this.faceApi.deletePersonGroup(this.selectedGroupId).subscribe(() => {
      _.remove(this.personGroups, x => x.personGroupId === this.selectedGroupId);
      this.selectedGroupId = '';
    });
  }

  onGroupsChange() {
    if (this.selectedGroupId) {
      this.loading = true;
      this.faceApi.getPersonsByGroup(this.selectedGroupId).subscribe(data => {
        this.personList = data;
        this.selectedPerson = null;
        this.personFaces = [];
        this.loading = false;
      });
    }
  }

  personClick(person) {
    this.selectedPerson = person;
    this.faceApi.getPersonFaces(this.selectedGroupId, this.selectedPerson.personId).subscribe(data => {
      this.personFaces = data;
    });
  }

  addPerson() {
    this.inputBox.show('Add Person', 'Person Name:').then(result => {
      let newPerson: any = { name: result };
      this.faceApi.createPerson(this.selectedGroupId, { name: result }).subscribe(data => {
        newPerson.personId = data.personId;
        this.personList.push(newPerson);
        this.selectedPerson = newPerson;
      });
    });
  }

  deletePerson(personId) {
    this.faceApi.deletePerson(this.selectedGroupId, this.selectedPerson.personId).subscribe(() => {
      _.remove(this.personList, x => x.personId === this.selectedPerson.personId);
      this.selectedPerson = null;
    });
  }

  addPersonFace() {
    this.inputBox.show('Add Face', 'URL:').then(result => {
      this.faceApi.addPersonFace(this.selectedGroupId, this.selectedPerson.personId, result).subscribe(data => {
        let newFace = { persistedFaceId: data.persistedFaceId, userData: result };
        this.personFaces.push(newFace);
      });
    });
  }

  deletePersonFace(persistedFaceId) {
    this.faceApi.deletePersonFace(this.selectedGroupId, this.selectedPerson.personId, persistedFaceId).subscribe(() => {
      _.remove(this.personFaces, x => x.persistedFaceId === persistedFaceId);
    });
  }

  trainPersonGroup() {
    this.loading = true;
    this.faceApi.trainPersonGroup(this.selectedGroupId).subscribe(() => {
      this.toastr.pop('info', 'Training Initiated', 'Training has been initiated...');
      this.loading = false;
    });
  }

  getGroupTrainingStatus() {
    this.loading = true;
    this.faceApi.getPersonGroupTrainingStatus(this.selectedGroupId).subscribe(result => {
      switch (result.status) {
        case 'succeeded':
          this.toastr.pop('success', 'Training Succeeded');
          break;
        case 'running':
          this.toastr.pop('info', 'Training still in progress...', 'Check back later');
          break;
        case 'failed':
          this.toastr.pop('error', 'Error during Training', result.message);
          break;
        default:
          break;
      }
      this.loading = false;
    });
  }
}
These methods keep track of all the IDs and perform the necessary operations with those IDs on button clicks.
Face detection
At the top right of the application, you can find the Face Recognition tab, which has Face Detection, Face Grouping, and Look-alike Faces as submenu items. Click on Face Detection.
Selecting the Face Detection option opens a screen where you provide the image on which faces need to be detected. Put the URL of the image in the Image URL text box and click Detect. Note that I used the same image I used initially with the API to detect faces. This time again, the same API call is made, and we see the detected faces outlined with yellow squares.
Following is the code for face-tester.component.html under the src > app > face-tester folder.
<div class="container">
  <ngx-loading [show]="loading" [config]="{ backdropBorderRadius: '14px' }"></ngx-loading>
  <div class="card">
    <h3 class="card-header">Test Faces</h3>
    <div class="card-body">
      <div class="form-group">
        <label>Person Group</label>
        <select [(ngModel)]="selectedGroupId" name="personGroups" class="form-control">
          <option value="">(Select)</option>
          <option *ngFor="let group of personGroups" [value]="group.personGroupId">
            {{group.name}} ({{group.personGroupId}})
          </option>
        </select>
      </div>
      <div class="form-group">
        <label>Image URL:</label>
        <input type="text" class="form-control" name="groupName" [(ngModel)]="imageUrl">
      </div>
      <button class="btn btn-primary mr-sm-2" (click)="detect()">Detect</button>
      <button class="btn btn-primary" (click)="identify()">Identify</button>
      <hr/>
      <div *ngIf="selectedFace" class="text-primary">
        <pre class="text-primary">{{selectedFace | json}}</pre>
      </div>
      <div *ngIf="selectedFace && selectedFace.identifiedPerson">
        <ngb-alert>
          Subject Identified: {{selectedFace.name}}
        </ngb-alert>
      </div>
    </div>
  </div>
  <div class="card">
    <div class="mainImgContainer" *ngIf="imageUrl">
      <img #mainImg class="card-img main-img" [src]="imageUrl" (load)="imageLoaded($event)" />
      <div [ngClass]="{'face-box-green': item.identifiedPerson, 'face-box-yellow': !item.identifiedPerson}"
           *ngFor="let item of detectedFaces"
           (click)="faceClicked(item)"
           [style.top.px]="item.faceRectangle.top * multiplier"
           [style.left.px]="item.faceRectangle.left * multiplier"
           [style.height.px]="item.faceRectangle.height * multiplier"
           [style.width.px]="item.faceRectangle.width * multiplier"></div>
    </div>
  </div>
</div>
Code for face-tester.component.ts is below.
import { Component, OnInit, ViewChild } from '@angular/core';
import { FaceApiService } from '../services/face-api-service.service';
import * as _ from 'lodash';
import { forkJoin } from 'rxjs/observable/forkJoin';

@Component({
  selector: 'app-face-tester',
  templateUrl: './face-tester.component.html',
  styleUrls: ['./face-tester.component.css']
})
export class FaceTesterComponent implements OnInit {
  loading = false;
  public detectedFaces: any;
  public identifiedPersons = [];
  public imageUrl: string;
  public multiplier: number;
  public personGroups = [];
  public selectedFace: any;
  public selectedGroupId = '';
  @ViewChild('mainImg') mainImg;

  constructor(private faceApi: FaceApiService) { }

  ngOnInit() {
    this.loading = true;
    this.faceApi.getPersonGroups().subscribe(data => {
      this.personGroups = data;
      this.loading = false;
    });
  }

  detect() {
    this.loading = true;
    this.faceApi.detect(this.imageUrl).subscribe(data => {
      this.detectedFaces = data;
      console.log('**detect results', this.detectedFaces);
      this.loading = false;
    });
  }

  faceClicked(face) {
    this.selectedFace = face;
    if (this.selectedFace.identifiedPersonId) {
      let identifiedPerson = _.find(this.identifiedPersons, { 'personId': face.identifiedPersonId });
      this.selectedFace.name = identifiedPerson.name;
    }
  }

  identify() {
    let faceIds = _.map(this.detectedFaces, 'faceId');
    this.loading = true;
    // NOTE: for production app, max groups of 10
    this.faceApi.identify(this.selectedGroupId, faceIds).subscribe(identifiedFaces => {
      console.log('**identify results', identifiedFaces);
      let obsList = [];
      _.forEach(identifiedFaces, identifiedFace => {
        if (identifiedFace.candidates.length > 0) {
          let detectedFace = _.find(this.detectedFaces, { faceId: identifiedFace.faceId });
          detectedFace.identifiedPerson = true;
          detectedFace.identifiedPersonId = identifiedFace.candidates[0].personId;
          detectedFace.identifiedPersonConfidence = identifiedFace.candidates[0].confidence;
          obsList.push(this.faceApi.getPerson(this.selectedGroupId, identifiedFace.candidates[0].personId));
        }
      });
      // Call getPerson() for each identified face
      forkJoin(obsList).subscribe(results => {
        this.identifiedPersons = results;
        this.loading = false;
      });
    });
  }

  imageLoaded($event) {
    this.selectedFace = null;
    this.detectedFaces = [];
    let img = this.mainImg.nativeElement;
    this.multiplier = img.clientWidth / img.naturalWidth;
  }
}
This code gets the faces detected in the provided image and draws a yellow square around each one. When you click on a face, the JSON for that face is shown.
To make sure it works fine, I performed another detect. I uploaded one more image, of my friend and me together; clicking Detect gives two yellow squares, one on each face. This time, also select the person group, i.e., the family person group we created earlier. Note that we are now detecting images of me and my friend, who were already added as persons in the person group, and we trained the person group earlier. So we get two yellow squares on detect.
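The squares land exactly on the faces because the `faceRectangle` values returned by `detect` are in natural-image pixels, while the component scales them by `clientWidth / naturalWidth` before absolutely positioning the overlay divs. Here is a standalone sketch of that scaling; the `Rect` interface and `scaleRect` helper are illustrative names, not part of the app.

```typescript
// Scale a face rectangle from natural-image pixels to rendered-image pixels.
interface Rect { top: number; left: number; width: number; height: number; }

function scaleRect(r: Rect, clientWidth: number, naturalWidth: number): Rect {
  const multiplier = clientWidth / naturalWidth;  // same ratio as imageLoaded()
  return {
    top: r.top * multiplier,
    left: r.left * multiplier,
    width: r.width * multiplier,
    height: r.height * multiplier,
  };
}
```

This is why the component recomputes `multiplier` in the image's `(load)` handler: the ratio is only known once the browser has laid the image out.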
Face Identification
Now, since these persons are part of the person group, they should ideally be identifiable. Click Identify, which sends the identify call to the API. Once we get a response, we see the yellow squares change to green, which means identification succeeded.
Cross-verify by clicking on a face to see the JSON corresponding to the identified face. The first subject is identified as “Udeep”,
and the second is identified as “Akhil”. These faces are identified because they already have entries in the person group, and their faces were in the person group when it was trained.
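The yellow-to-green switch hinges on the shape of the `identify` response: each detected face comes back with a list of candidates ranked by confidence (already filtered server-side by the `confidenceThreshold: 0.4` the app sends), or with an empty list if nobody matched. A small sketch of picking the winning candidate; `bestCandidate` is an illustrative helper, not part of the app, which inlines the same `candidates[0]` logic.

```typescript
// Pick the top-ranked identification candidate for a face, if any.
interface Candidate { personId: string; confidence: number; }
interface IdentifiedFace { faceId: string; candidates: Candidate[]; }

function bestCandidate(face: IdentifiedFace): Candidate | null {
  // candidates are returned ordered by confidence, highest first
  return face.candidates.length > 0 ? face.candidates[0] : null;
}
```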
Face Grouping
Let’s perform the face-grouping operations. We’ll provide a few URLs separated by newlines and execute grouping. These URLs are images of Udeep, Arsh, and Akhil. Ideally, grouping should group similar faces together.
Once the grouping request is made, we see that the images are grouped per person: out of the 11 URLs provided for grouping, five faces were identified as mine, three as Arsh’s, and three as Udeep’s. It worked perfectly. Note that for my images, it even identified my face within a group photo.
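Under the hood, the group endpoint returns arrays of `faceId`s (plus a `messyGroup` of leftovers), so the component keeps a faceId-to-URL lookup from the earlier detect calls in order to render each group's thumbnails. A standalone sketch of that lookup, mirroring `getUrlForFace`; the `urlsForGroup` helper is an illustrative name I added.

```typescript
// Map a group of faceIds back to the image URLs captured during detect().
interface DetectedFace { faceId: string; url: string; }

function urlsForGroup(group: string[], faces: DetectedFace[]): string[] {
  return group.map(id => {
    const face = faces.find(f => f.faceId === id);
    return face ? face.url : '';  // unknown ids render as blank thumbnails
  });
}
```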
Code for face-grouping.component.html is as follows.
<div class="container">
  <ngx-loading [show]="loading" [config]="{ backdropBorderRadius: '14px' }"></ngx-loading>
  <div class="card">
    <h3 class="card-header">Face Grouping</h3>
    <div class="card-body">
      <textarea rows="8" cols="80" [(ngModel)]="imageUrls">
      </textarea>
      <hr/>
      <button class="btn btn-primary" (click)="executeGrouping()">Execute Grouping</button>
      <div *ngFor="let group of groupingResults.groups">
        <h3>Group</h3>
        <div class="row">
          <div class="col-md-3" *ngFor="let face of group">
            <div class="card text-center">
              <div class="card-body card-block-img-container">
                <span class="img-container">
                  <img class="img-person-face img-thumnail" [src]="getUrlForFace(face)" height="140" width="140" />
                </span>
              </div>
            </div>
          </div>
        </div>
      </div>
      <div *ngIf="groupingResults.messyGroup">
        <h3>Mixed Group</h3>
        <div class="row">
          <div class="col-md-3" *ngFor="let face of groupingResults.messyGroup">
            <div class="card text-center">
              <div class="card-body card-block-img-container">
                <span class="img-container">
                  <img class="img-person-face img-thumnail" [src]="getUrlForFace(face)" height="140" width="140" />
                </span>
              </div>
            </div>
          </div>
        </div>
      </div>
    </div>
  </div>
</div>
Code for face-grouping.component.ts is as follows.
import { Component, OnInit } from '@angular/core';
import * as _ from 'lodash';
import { FaceApiService } from '../services/face-api-service.service';
import { forkJoin } from 'rxjs/observable/forkJoin';

@Component({
  selector: 'app-face-grouping',
  templateUrl: './face-grouping.component.html',
  styleUrls: ['./face-grouping.component.css']
})
export class FaceGroupingComponent implements OnInit {
  public imageUrls: string[];
  public faces: any[];
  public groupingResults: any = {};
  public loading = false;

  constructor(private faceApi: FaceApiService) { }

  ngOnInit() { }

  executeGrouping() {
    let urls = _.split(this.imageUrls, '\n');
    let detectList = [];
    _.forEach(urls, url => {
      if (url) {
        detectList.push(this.faceApi.detect(url));
      }
    });
    this.loading = true;
    forkJoin(detectList).subscribe(detectResults => {
      this.faces = [];
      _.forEach(detectResults, (value, index) =>
        this.faces.push({ url: urls[index], faceId: value[0].faceId }));
      let faceIds = _.map(this.faces, 'faceId');
      this.faceApi.group(faceIds).subscribe(data => {
        this.groupingResults = data;
        this.loading = false;
      });
    });
  }

  getUrlForFace(faceId) {
    var face = _.find(this.faces, { faceId: faceId });
    return face.url;
  }
}
Finding Similar Faces
In this module of the application, we'll try to find similar faces among a set of supplied image URLs. We'll supply a few image URLs to search within, plus one URL of the face we want to match. For example, following is the image URL for which I want to find similar faces.
Now, in the Find Similar screen, provide newline-separated URLs of the same or other images of the person among which you want to find the look-alike, and in the next box give the URL of the person you want to match. Click the “Find Similar” button and it gives you the matching face. If no face matches, it returns nothing.
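The component's first step is parsing that textarea input: it splits on newlines with `_.split(this.imageUrls, '\n')` and skips empty lines before adding each URL to a throwaway face list. A dependency-free sketch of that parsing step; the `parseUrls` name is mine, and I have added whitespace trimming as a small assumption the original code does not make.

```typescript
// Split newline-separated textarea input into a clean list of URLs.
function parseUrls(textarea: string): string[] {
  return textarea
    .split('\n')
    .map(u => u.trim())           // assumption: tolerate stray whitespace
    .filter(u => u.length > 0);   // skip blank lines, as the component does
}
```

The remaining steps (create face list, add faces, detect the query face, call findsimilars) are numbered as comments in the component code below.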
You can find the find similar component at the following location shown in the image.
Code for find-similar.component.ts is as follows.
import { Component, OnInit } from '@angular/core';
import { FaceApiService } from '../services/face-api-service.service';
import * as _ from 'lodash';
import { forkJoin } from 'rxjs/observable/forkJoin';

@Component({
  selector: 'app-find-similar',
  templateUrl: './find-similar.component.html',
  styleUrls: ['./find-similar.component.css']
})
export class FindSimilarComponent implements OnInit {
  public faces: any[];
  public loading = false;
  public imageUrls: string[];
  public queryFace: string = 'https://www.codeproject.com/script/Membership/Uploads/7869570/Akhil_5.png';
  public findSimilarResults: any[];

  constructor(private faceApi: FaceApiService) { }

  ngOnInit() { }

  findSimilar() {
    this.loading = true;
    // 1. First create a face list with all the imageUrls
    let faceListId = (new Date()).getTime().toString(); // comically naive, but this is just for demo
    this.faceApi.createFaceList(faceListId).subscribe(() => {
      // 2. Now add all faces to the face list
      let facesSubscribableList = [];
      let urls = _.split(this.imageUrls, '\n');
      _.forEach(urls, url => {
        if (url) {
          facesSubscribableList.push(this.faceApi.addFace(faceListId, url));
        }
      });
      forkJoin(facesSubscribableList).subscribe(results => {
        this.faces = [];
        _.forEach(results, (value, index) =>
          this.faces.push({ url: urls[index], faceId: value.persistedFaceId }));
        // 3. Call Detect on the query face so we can establish a faceId
        this.faceApi.detect(this.queryFace).subscribe(queryFaceDetectResult => {
          let queryFaceId = queryFaceDetectResult[0].faceId;
          // 4. Call Find Similar with the query face and the face list
          this.faceApi.findSimilar(faceListId, queryFaceId).subscribe(finalResults => {
            console.log('**findsimilar Results', finalResults);
            this.findSimilarResults = finalResults;
            this.loading = false;
          });
        });
      });
    });
  }

  getUrlForFace(faceId) {
    var face = _.find(this.faces, { faceId: faceId });
    return face.url;
  }
}
Code for find-similar.component.html is as follows.
<div class="container">
  <ngx-loading [show]="loading" [config]="{ backdropBorderRadius: '14px' }"></ngx-loading>
  <div class="card">
    <h3 class="card-header">Find Similar</h3>
    <div class="card-body">
      <textarea rows="8" cols="80" [(ngModel)]="imageUrls">
      </textarea>
      <input type="text" class="form-control" placeholder="Query Face" [(ngModel)]="queryFace" />
      <hr/>
      <button class="btn btn-primary" (click)="findSimilar()">Find Similar</button>
      <div *ngIf="queryFace">
        <h3>Query Face</h3>
        <div class="row">
          <div class="col-md-3">
            <div class="card text-center">
              <div class="card-body card-block-img-container">
                <span class="img-container">
                  <img class="img-person-face img-thumnail" [src]="queryFace" height="140" width="140" />
                </span>
              </div>
            </div>
          </div>
        </div>
      </div>
      <div *ngIf="findSimilarResults">
        <h3>Find Similar Results</h3>
        <div class="row">
          <div class="col-md-3" *ngFor="let face of findSimilarResults">
            <div class="card text-center">
              <div class="card-body card-block-img-container">
                <span class="img-container">
                  <img class="img-person-face img-thumnail" [src]="getUrlForFace(face.persistedFaceId)" height="140" width="140" />
                </span>
                <hr/>
                <span>Confidence: {{face.confidence}}</span>
              </div>
            </div>
          </div>
        </div>
      </div>
    </div>
  </div>
</div>
Conclusion
This was an end-to-end article showing the capabilities of the Azure Face API, one of Azure's Cognitive Services. The API is intelligent and robust, leveraging AI and machine-learning capabilities to perform its operations. We saw in detail how to create an Azure account and how to create a Face API resource and get it up and running. We saw how CRUD operations can be performed against the Face API for person groups, persons, and faces. Beyond detection, the API also returns facial attributes of a detected face, identifies faces against a trained model, groups faces, and finds similar faces. I hope it was fun.
References
- https://github.com/smichelotti/ps-face-api-explorer
- https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Vision.Face/
- https://azure.microsoft.com/en-in/services/cognitive-services/face/
- https://centralindia.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236
Code
- SDK Code: https://github.com/akhilmittal/Face-API-SDK
- Image Classification Application: https://github.com/akhilmittal/Face-API