Face SDK Architecture
This article provides a detailed overview of the Regula Face SDK: its modules, capabilities, and architecture.
Regula Face SDK is a cross-platform biometric verification solution for digital identity verification. The SDK enables convenient and reliable face capture on the client side (mobile, web, and desktop) and further processing on the server side.
The product consists of the following modules: Face Detection, Face Comparison (aka Match), Face Identification (aka Search), and Liveness Assessment.
Use Face Detection to analyze images, recognize faces in them, and return cropped and aligned portraits of the detected people.
Face Detection includes Face Attributes Evaluation and Face Image Quality Assessment.
Face Attributes Evaluation estimates a person's age range; checks whether the eyes are occluded, closed, or open; detects facial expressions such as a smile; and determines whether glasses, sunglasses, a head covering, a medical mask, etc. are present.
Face Image Quality Assessment is a fast and handy way to check whether a portrait meets certain standards, for example, the ICAO, Schengen visa, or US visa requirements.
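A quality check is requested like any other detection call. The sketch below builds such a request; the field names (`image`, `processParam`) and the scenario value are assumptions for illustration, not the SDK's real schema, which is defined by its OpenAPI specification.

```python
import base64

# Hypothetical detection request with a quality-assessment scenario.
# "processParam" and the "QualityICAO" scenario name are assumptions;
# consult the OpenAPI specification for the authoritative schema.
def build_detect_request(image_bytes: bytes, scenario: str = "QualityICAO") -> dict:
    return {
        "image": base64.b64encode(image_bytes).decode("ascii"),
        "processParam": {"scenario": scenario},
    }

request = build_detect_request(b"\xff\xd8fake-jpeg")
```

The same request shape would carry other scenarios, e.g. one enabling attribute evaluation.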
The Face Comparison feature is a convenient and powerful way to compare two or more portraits (on the same image or on different ones) and find out how similar the detected faces are.
The Face Identification module lets you match a face from an image against a database of faces. You can create and manage such a database of identities, upload photos, and associate them with names. When you submit a photo, the system searches the database for a match.
How to use Face Identification
The Liveness Assessment module checks whether the biometric information source in front of the camera is a physically present, live person.
How to use Liveness Assessment
The basic version of the Regula Face SDK covers only the Face Detection and Face Comparison capabilities. It includes a Web Service and the Face Core library:
The Web Service operates over HTTP(S). It receives a request and invokes the Core library to process it. When the Core library returns the results, the Web Service sends them back in the response.
To use the basic capabilities, send an image and processing parameters via the API. Here is the OpenAPI specification.
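A raw API call can be sketched with nothing but the standard library. The endpoint path, port, and payload shape below are assumptions for illustration; the OpenAPI specification is the authoritative contract.

```python
import base64
import json
import urllib.request

# Assumed host, port, and endpoint path -- placeholders for your deployment.
FACE_SDK_URL = "http://localhost:41101/api/match"

def build_match_request(img1: bytes, img2: bytes) -> urllib.request.Request:
    # Images are base64-encoded into an assumed JSON payload shape.
    payload = {
        "images": [
            {"index": 1, "data": base64.b64encode(img1).decode("ascii")},
            {"index": 2, "data": base64.b64encode(img2).decode("ascii")},
        ]
    }
    return urllib.request.Request(
        FACE_SDK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_match_request(b"jpeg-one", b"jpeg-two")
# The response would then be read with urllib.request.urlopen(req).
```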
To simplify the integration and further maintenance, we supply two additional components: Mobile SDK and API clients.
The Mobile SDK provides a framework to interact with the Face SDK Web Service in a way that is more familiar for a mobile developer. This framework can be embedded into your mobile application to speed up and simplify the development.
There are two native mobile SDK frameworks for Android and iOS and several cross-platform ones. You can find more information in the Mobile SDK section.
The clients wrap the Web Service API and provide interfaces for popular programming languages.
The following clients are available:
- Java client compatible with the JVM and Android
- Python 3.5+ client
- C# client for .NET & .NET Core
The Liveness Assessment module requires additional backend and end-user components.
The additional backend components for Liveness Assessment are the following:

- Database
- Data Storage

The Database stores the metadata of a liveness check, for example, transaction ID, timestamp, result, etc. At the moment, only a PostgreSQL database is supported.

The Data Storage holds selfies, videos, and other binary data collected to carry out a check. Any S3-compatible storage can be used as the Data Storage by connecting it to the Web Service.
Both the Database and the Data Storage can be hosted in the cloud as SaaS offerings. The Face SDK itself can also be installed in the cloud.
To collect the biometric data on the end-user side, two components are provided:
As previously mentioned, the Mobile SDK provides a framework to embed into your mobile application.
The Web Component lets you add automatic capture of a user's selfie and a liveness check to your website.
For security reasons, the Liveness Assessment procedure requires a mutual TLS handshake between the client and the server. Submitting data for a liveness check directly to the server via the Web API is not allowed, as there is no way to verify that such data has not been tampered with. Therefore, the Mobile SDK or the Web Component must be used.
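The Mobile SDK and Web Component establish mutual TLS internally, so you never configure it by hand; the stdlib sketch below only illustrates what the client side of a mutual handshake involves (the certificate file names are placeholders).

```python
import ssl

# Conceptual client-side mutual-TLS setup: the client validates the server
# certificate AND presents its own certificate for the server to validate.
ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
ctx.verify_mode = ssl.CERT_REQUIRED  # the server certificate must validate

# Loading a client certificate is what makes the handshake *mutual*.
# The file names are placeholders for credentials issued for a deployment:
# ctx.load_cert_chain(certfile="client.crt", keyfile="client.key")
```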
The setup for the Face Identification module is more complex still and requires the following components:
- Database to store metadata and relations
- Data Storage to keep binary data
- Milvus database as vector storage
To understand these requirements, let's explore how the feature works. There are two main parts of the identification process:
- Adding a person to a database
- Searching for a person
Adding a person
The step-by-step procedure for adding a person is as follows:
1. Your application sends a request that contains the person's name, photo, and, optionally, some metadata.
2. The Web Service saves the photo to the S3-compatible Data Storage; the name and metadata are saved to the PostgreSQL Database.
3. The photo is passed to the Core library, which calculates a descriptor (a mathematical representation of a face image that captures its distinctive features).
4. The descriptor is saved to the Milvus database.
5. The Web Service returns the ID of the created person.
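The storage split in the steps above can be sketched with in-memory stand-ins: one dict each for PostgreSQL, the S3-compatible storage, and Milvus. The descriptor computation is a placeholder, since the real one happens inside the Core library.

```python
import uuid

# Purely illustrative in-memory stand-ins for the three storages.
metadata_db = {}    # stands in for PostgreSQL (names, metadata)
blob_storage = {}   # stands in for the S3-compatible Data Storage (photos)
vector_db = {}      # stands in for Milvus (descriptors)

def fake_descriptor(photo):
    # Placeholder: the real descriptor is computed by the Core library.
    return [b / 255 for b in photo[:4]]

def add_person(name, photo, metadata=None):
    person_id = str(uuid.uuid4())                  # step 5: ID returned to the caller
    blob_storage[person_id] = photo                # step 2: photo to Data Storage
    metadata_db[person_id] = {"name": name, **(metadata or {})}  # step 2: metadata to Database
    vector_db[person_id] = fake_descriptor(photo)  # steps 3-4: descriptor to Milvus
    return person_id
```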
Searching for a person
Searching takes the following steps:
1. Your application sends a request that contains a photo only.
2. The Core library calculates the corresponding descriptor.
3. The Face SDK sends a request with this descriptor to the Milvus database.
4. The Milvus database carries out a search and returns a list of similar persons.
5. The Web Service extracts the corresponding names and metadata from PostgreSQL and the photos from the S3-compatible storage.
6. Finally, the Web Service sends back a response with the found data.
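The search steps above can be sketched the same way: a dict stands in for Milvus, and plain cosine similarity stands in for its internal distance metric and indexes (the actual descriptors and ranking come from the Core library and Milvus).

```python
import math

# A dict of descriptors stands in for the Milvus vector storage.
vector_db = {
    "person-1": [0.9, 0.1, 0.0],
    "person-2": [0.1, 0.9, 0.2],
}

def cosine(a, b):
    # Cosine similarity between two descriptor vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def search(query_descriptor, top_k=1):
    # Step 4: rank stored descriptors by similarity to the query descriptor.
    ranked = sorted(
        vector_db.items(),
        key=lambda item: cosine(query_descriptor, item[1]),
        reverse=True,
    )
    return [person_id for person_id, _ in ranked[:top_k]]
```

The returned IDs are what the Web Service would then resolve to names, metadata, and photos (step 5).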
- To find the Face SDK installation guides for different configurations, navigate to the Installation page.
- The detailed instructions on how to use the Face SDK modules can be found in the Usage section.