Save time and money by filtering faces during indexing with Amazon Rekognition

Amazon Rekognition is a deep-learning-based image and video analysis service that can identify objects, people, text, scenes, and activities, as well as detect inappropriate content. Using the new Amazon Rekognition face filtering feature, you now have control over the quality and quantity of faces that you index for face recognition. This saves on cost, reduces development time, and improves face recognition accuracy.

Prior to this release, when you used the IndexFaces API action, Amazon Rekognition detected all the faces in an image and indexed them into the collection that you specified. However, some images might contain faces that you don’t want to index. For example, you might not want to index small, blurry faces that could adversely affect face search quality, or irrelevant faces in the background at a crowded event, such as a red carpet premiere. Indexing such faces increases cost and is detrimental to accuracy in many cases. Until now, you could only filter out such faces by running face detection, applying filtering rules on each face crop, and indexing the face crops that passed your filters. The new Amazon Rekognition face filtering feature simplifies this process by allowing you to filter faces during indexing itself, using just two parameters. You no longer need to write and maintain additional code with multiple API calls or create your own rules to gauge quality.
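For context, the older workaround looked roughly like the sketch below: call DetectFaces, apply your own size and sharpness thresholds, and only then crop and index the faces that pass. The thresholds and the helper name find_acceptable_faces are illustrative choices, not part of the Rekognition API.

import boto3

def find_acceptable_faces(image_bytes, min_relative_size=0.05, min_sharpness=30.0):
    # Illustrative thresholds -- you had to pick and tune these yourself
    rekognition_client = boto3.client('rekognition')
    response = rekognition_client.detect_faces(
        Image={'Bytes': image_bytes},
        Attributes=['ALL'])
    acceptable = []
    for face in response['FaceDetails']:
        box = face['BoundingBox']  # width/height are relative to the image dimensions
        if (box['Width'] >= min_relative_size and
                box['Height'] >= min_relative_size and
                face['Quality']['Sharpness'] >= min_sharpness):
            acceptable.append(box)
    # You would then crop each accepted bounding box out of the image and
    # call IndexFaces on every crop -- extra code and extra API calls per image
    return acceptable

With face filtering, this entire pre-processing step collapses into a single IndexFaces call.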

In this blog post, we’ll walk you through some examples to show you how to use the new Amazon Rekognition face filtering features.

Step 1 – Creating a collection

You can create a collection by using the CreateCollection API action.

Sample Request

import boto3

collection_name = "TestCollection"

def create_collection():
    # assumes aws default region and credentials
    rekognition_client = boto3.client('rekognition')
    response = rekognition_client.create_collection(CollectionId=collection_name)
    print(response)

create_collection()

Step 2 – Gathering and inspecting images to be used for indexing

After your collection is set up, you can gather the images you want to index faces from. Some images used for indexing may contain faces that are very small, or blurry, or have only a part of the face visible because the subject is looking away. Ideally, you want to ensure that only “good” quality faces get indexed while poor quality faces get filtered automatically. You might also encounter scenarios where you don’t want to index all the faces present in an image. For example, you might only want to index the two most prominent faces in a selfie taken with your date at a bar, where there are many other smaller faces of other people in the background.

To illustrate how this works, we will use two example photos. The first is sourced from Wikimedia Commons (photo by Alan Light) and the second is from Pexels.com.

First, let’s look at an example of a red carpet event that is being covered by an official photographer.

Next, let’s look at the faces found by Amazon Rekognition using face detection.

As you can see, there are two large and prominent faces and three other faces in the background. Now let’s look at the faces found in the second photo:

As you can see, there is a blurry face in the background that we probably don’t want to use for indexing.

We will now look at two use cases: (i) You only want to index the prominent, high quality faces but none of the faces in the background (first photo). (ii) You want to index all the “good” quality faces in the image (second photo). Let’s see how you can do this using face filtering.

Step 3 – Using face filtering to index the desired faces into your collection

With the new Amazon Rekognition face filtering feature, for the use cases we just described, you can simply specify appropriate values for the new MaxFaces and QualityFilter attributes in the request for the IndexFaces API action.

MaxFaces is used to specify the maximum number of faces to index, and must be greater than or equal to 1. Please note that IndexFaces returns no more than 100 detected faces in an image, even if you specify a larger value for MaxFaces. On the other hand, QualityFilter is used to toggle filtering of low quality faces. If you specify AUTO, Amazon Rekognition automatically finds low quality faces and leaves them out of indexing. If you specify NONE, no filtering is performed, and all found faces get indexed regardless of their quality. Filtered faces don’t get indexed. The default is set to AUTO to ensure that only good quality faces are indexed.
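To make this concrete, here is a minimal sketch of an IndexFaces request that sets both parameters explicitly; switch QualityFilter to 'NONE' if you want every detected face indexed regardless of quality. The image and collection names reuse the examples in this post.

import boto3

rekognition_client = boto3.client('rekognition')  # assumes aws default region and credentials

with open("TestImage.jpg", 'rb') as image:
    response = rekognition_client.index_faces(
        Image={'Bytes': image.read()},
        CollectionId="TestCollection",
        MaxFaces=2,             # index at most the two largest, most confident faces
        QualityFilter='AUTO')   # 'AUTO' (default) filters low quality faces; 'NONE' disables filtering
print(response)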

For the first use case, you can simply use the following settings:

Request Syntax

import boto3

collection_name = "TestCollection"
image_file = "TestImage.jpg"
external_image_id = "TestImage"

def index_faces():
    # assumes aws default region and credentials
    rekognition_client = boto3.client('rekognition')
    with open(image_file, 'rb') as image:
        # QualityFilter defaults to AUTO
        rekognition_response = rekognition_client.index_faces(
            Image={'Bytes': image.read()},
            CollectionId=collection_name,
            ExternalImageId=external_image_id,
            MaxFaces=2)
    print(rekognition_response)

index_faces()

We have set MaxFaces to 2, so Amazon Rekognition picks the two largest faces with the highest detection confidence and indexes only those faces. In this image, the faces of the man and woman in the foreground will get indexed, whereas the faces in the background won’t.

If you were to change MaxFaces to 5, the other three faces in the background would also get indexed:

For the second use case, you can simply use the default settings and let the QualityFilter do its job.

Request Syntax

import boto3

collection_name = "TestCollection"
image_file = "TestImage.jpg"
external_image_id = "TestImage"

def index_faces():
    # assumes aws default region and credentials
    rekognition_client = boto3.client('rekognition')
    with open(image_file, 'rb') as image:
        # QualityFilter defaults to AUTO
        rekognition_response = rekognition_client.index_faces(
            Image={'Bytes': image.read()},
            CollectionId=collection_name,
            ExternalImageId=external_image_id)
    print(rekognition_response)

index_faces()

In this case, only the blurry face in the background is filtered out, while the prominent face in front gets indexed.

For each IndexFaces call, faces that are detected but not indexed are returned in an array called UnindexedFaces. Faces might not be indexed for reasons such as:

  • The number of faces detected exceeds the value of the MaxFaces request parameter.

  • The face is too blurry.

  • The face is too dark.

  • The face has an extreme pose, such as when a person is looking away from the camera.

  • The face is too small compared to the image dimensions.

For more details on UnindexedFaces, see the IndexFaces documentation. After all your faces are indexed with your preferred filtering options, you can verify the number of faces stored in the collection by using the DescribeCollection API action.
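As a small sketch of what that looks like in code, the snippet below prints the reason codes for any faces that were skipped in the IndexFaces response from Step 3, and then confirms the collection’s face count with DescribeCollection. The function name summarize_indexing is just for illustration.

import boto3

collection_name = "TestCollection"

def summarize_indexing(rekognition_response):
    # Reason codes explain why each detected face was left out of the index
    for unindexed in rekognition_response.get('UnindexedFaces', []):
        box = unindexed['FaceDetail']['BoundingBox']
        print("Skipped face at {}: {}".format(box, unindexed['Reasons']))

    # Confirm how many faces are now stored in the collection
    rekognition_client = boto3.client('rekognition')
    description = rekognition_client.describe_collection(CollectionId=collection_name)
    print("Faces in collection: {}".format(description['FaceCount']))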

Step 4 – Searching for faces in the collection

When your collection is ready, you can start searching within it using the steps outlined in Searching Faces in an Image and Searching Faces in Video. In the examples above, since you are processing images, you can use the following code:

import boto3

collection_name = "TestCollection"
image_file = "TestImage.jpg"

def search_faces_by_image():
    # assumes aws default region and credentials
    rekognition_client = boto3.client('rekognition')
    with open(image_file, 'rb') as image:
        rekognition_response = rekognition_client.search_faces_by_image(
            Image={'Bytes': image.read()},
            CollectionId=collection_name)
    print(rekognition_response)

search_faces_by_image()
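The full response is verbose; as a quick follow-up sketch, this is one way to pull out just the matched face IDs and similarity scores from the SearchFacesByImage response (the helper name print_matches is illustrative).

def print_matches(rekognition_response):
    # Each match pairs the stored face's metadata with a similarity score
    for match in rekognition_response['FaceMatches']:
        face = match['Face']
        print("FaceId: {}  ExternalImageId: {}  Similarity: {:.1f}%".format(
            face['FaceId'], face.get('ExternalImageId', ''), match['Similarity']))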

In this blog post, you learned how to use the new Amazon Rekognition face filtering features to manage the quantity and quality of faces that you index into your face collection for performing face recognition. This helps reduce cost by controlling the number of faces that get indexed into a collection, saves the time spent in managing low quality faces through code workarounds, and improves face search accuracy in most cases.

Face filtering is available in all AWS Regions where Amazon Rekognition is supported, at no additional cost. For more information, see the Amazon Rekognition documentation. You can get started by downloading the latest version of the AWS SDK.

To read about some interesting face recognition use cases with Amazon Rekognition, please see our blog posts on Identity Verification and the Fight Against Human Trafficking.


About the Authors

Venkatesh Bagaria is a Senior Product Manager for Amazon Rekognition. He focuses on building powerful but easy-to-use deep learning-based image and video analysis services for AWS customers. In his spare time, you’ll find him watching way too many stand-up comedy specials and movies, cooking spicy Indian food, and trying to pretend that he can play the guitar.

 

Keith Johnson is a software engineer for Amazon Rekognition. He spends his free time reading and trying to either cook or find pickup basketball games.

 

Yash Singh is a software engineer for Amazon Rekognition. He is passionate about finding answers to challenging problems in computer vision. In his spare time, he loves to read, make music, and spend time with family.

 

Vivek Bhadauria is a senior software engineer for Amazon Rekognition. He focuses on building deep learning-based computer vision solutions for AWS customers. Outside of work, Vivek enjoys trekking and following cricket.