There has been a steep increase in the number of people creating, watching, and sharing videos. Most of the videos created today are user-generated content, but publishing this raw content comes with risk. To help ensure a positive website experience for customers by removing inappropriate or unwanted content, companies need a scalable content moderation process.
In this blog post, I show you how to build a serverless architecture to evaluate video content for enhanced compliance and moderation using Amazon Rekognition Video. Customers (especially media and entertainment companies) face the challenge of categorizing videos for age-appropriate audiences and of identifying inappropriate content in videos. Amazon Rekognition Video is a deep-learning-powered video analysis service that tracks the path of objects, detects activities, and recognizes objects, celebrities, and content types. Amazon Rekognition Video can detect explicit and questionable content, so that you can filter videos based on your application and your compliance requirements.
Video compliance has traditionally been a manual task for companies, requiring someone to view the full video to validate it. This process is time consuming, not scalable, and error prone. The solution we propose substantially improves the efficiency of manual operators, reduces the volume of video content that must be reviewed, and lets reviewers focus only on flagged content. The serverless solution in this blog post uses Amazon Rekognition to give customers a cost-effective, scalable content moderation process. This helps companies monitor and publish videos while ensuring a positive customer experience and limiting potential reputation risks.
Launch the solution using the AWS CloudFormation script described in the Implementation section later in this post.
Architectural overview of solution
In our example, we stay close to a real-world use case in which we alert content moderators when a video is flagged for unwanted content. To integrate into a content pipeline, we represent workflow triggers (new asset arrivals) with Amazon S3 events, content metadata storage with Amazon DynamoDB, and digital asset archival with Amazon Glacier. Our example illustrates how simple it is to couple a serverless workflow using Amazon Simple Notification Service (SNS), AWS Lambda, and Amazon Rekognition with a minimal codebase.
Processing starts immediately after media has been uploaded to our asset ingest Amazon S3 bucket. An Amazon S3 event notification triggers a video processor Lambda function that initiates the StartContentModeration API action on the video file. The completion status of media processing by Amazon Rekognition Video is published to Amazon SNS, which in turn triggers a content alert Lambda function. This Lambda function obtains the moderation labels by using the GetContentModeration API action, parses them, and sends an alert through an SNS topic. Teams subscribed to the SNS topic are immediately alerted with the name of the file and the count of moderation labels present. The complete moderation label information, along with a timestamp, is stored in the database. The video file in the S3 bucket is then archived to Amazon Glacier, which provides low-cost archive storage, and this completes our processing pipeline.
Example
For the purposes of this exercise, we use a video of a swimming scene. When the video is processed, the solution sends an alert notification by email with the name of the file and the number of moderation labels detected.
Solution details
Our Video Processor Lambda function is triggered by the asset's arrival in Amazon S3 and calls the Amazon Rekognition Video API to start content moderation. The JobId obtained for the analysis to be performed by Amazon Rekognition is stored in an Amazon DynamoDB table. The following snippet illustrates how the Lambda function triggers Amazon Rekognition to initiate video content moderation and places the JobId information in a DynamoDB table:
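A minimal sketch of this handler in Python with boto3 is shown below. The environment variable names, the minimum confidence value, and the default SNS topic and IAM role ARNs are illustrative assumptions, not values from the deployed template.

```python
import os
import urllib.parse

import boto3

rekognition = boto3.client('rekognition')
dynamodb = boto3.resource('dynamodb')

# Illustrative environment variables; the actual names are set by the CloudFormation template.
VIDEO_ANALYSIS_TABLE = os.environ.get('VIDEO_ANALYSIS_TABLE', 'VideoAnalysis')
SNS_TOPIC_ARN = os.environ.get('REKOGNITION_SNS_TOPIC_ARN', 'arn:aws:sns:us-east-1:123456789012:example-topic')
REKOGNITION_ROLE_ARN = os.environ.get('REKOGNITION_ROLE_ARN', 'arn:aws:iam::123456789012:role/example-role')


def handler(event, context):
    # The S3 event notification carries the bucket name and object key of the uploaded video.
    record = event['Records'][0]['s3']
    bucket = record['bucket']['name']
    key = urllib.parse.unquote_plus(record['object']['key'])

    # Start asynchronous content moderation; the completion status is published to the SNS topic.
    response = rekognition.start_content_moderation(
        Video={'S3Object': {'Bucket': bucket, 'Name': key}},
        MinConfidence=50,
        NotificationChannel={
            'SNSTopicArn': SNS_TOPIC_ARN,
            'RoleArn': REKOGNITION_ROLE_ARN,
        },
    )
    job_id = response['JobId']

    # Store the JobId so that downstream functions can correlate results with the video file.
    dynamodb.Table(VIDEO_ANALYSIS_TABLE).put_item(
        Item={'JobId': job_id, 'Bucket': bucket, 'Key': key}
    )
    return {'JobId': job_id}
```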
After Amazon Rekognition completes the video processing, the service publishes a completion status to the Amazon SNS topic that has been specified in NotificationChannel. Valid status values published to the SNS topic are SUCCEEDED, FAILED, or ERROR. To verify the status of large video file analysis, check the JobStatus value in the Get operation response (GetContentModeration, for example). If the value is IN_PROGRESS, the analysis isn’t complete, and the completion status hasn’t yet been published to the SNS topic.
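For example, a simple polling helper (a sketch, not part of the deployed solution) could check the JobStatus like this:

```python
import time

import boto3

rekognition = boto3.client('rekognition')


def wait_for_moderation_job(job_id, delay_seconds=30):
    """Poll GetContentModeration until the analysis finishes (useful for large videos)."""
    while True:
        response = rekognition.get_content_moderation(JobId=job_id, MaxResults=1)
        status = response['JobStatus']
        if status != 'IN_PROGRESS':
            # SUCCEEDED or FAILED; the same status is also published to the SNS topic.
            return status
        time.sleep(delay_seconds)
```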
The Content Alert function is triggered by the SNS message; it first verifies that the status is SUCCEEDED before initiating the GetContentModeration API operation. You can use AWS Step Functions to handle FAILED or ERROR statuses. The following snippet illustrates how the Lambda function retrieves the moderation labels, stores them with their timestamps in the DynamoDB metadata table, and publishes an alert to the SNS topic:
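A minimal sketch of this handler, again in Python with boto3, follows. The environment variable names and defaults are illustrative assumptions; pagination of GetContentModeration results is handled with NextToken.

```python
import json
import os
from decimal import Decimal

import boto3

rekognition = boto3.client('rekognition')
dynamodb = boto3.resource('dynamodb')
sns = boto3.client('sns')

# Illustrative environment variables; the actual names are set by the CloudFormation template.
METADATA_TABLE = os.environ.get('METADATA_TABLE', 'VideoMetadata')
ALERT_TOPIC_ARN = os.environ.get('ALERT_TOPIC_ARN', 'arn:aws:sns:us-east-1:123456789012:content-alerts')


def handler(event, context):
    # The Rekognition completion notification arrives as a JSON string inside the SNS record.
    message = json.loads(event['Records'][0]['Sns']['Message'])
    if message['Status'] != 'SUCCEEDED':
        # FAILED or ERROR statuses could be routed to AWS Step Functions for handling.
        return

    job_id = message['JobId']
    video_name = message['Video']['S3ObjectName']

    # Collect all moderation labels, following pagination with NextToken.
    labels = []
    next_token = None
    while True:
        kwargs = {'JobId': job_id, 'SortBy': 'TIMESTAMP'}
        if next_token:
            kwargs['NextToken'] = next_token
        response = rekognition.get_content_moderation(**kwargs)
        labels.extend(response['ModerationLabels'])
        next_token = response.get('NextToken')
        if not next_token:
            break

    # Store each flagged scene (label name, confidence, and timestamp in milliseconds).
    table = dynamodb.Table(METADATA_TABLE)
    for label in labels:
        table.put_item(Item={
            'JobId': job_id,
            'Timestamp': label['Timestamp'],
            'Label': label['ModerationLabel']['Name'],
            'Confidence': Decimal(str(label['ModerationLabel']['Confidence'])),
        })

    # Alert subscribers with the file name and the number of moderation labels found.
    sns.publish(
        TopicArn=ALERT_TOPIC_ARN,
        Subject='Content moderation alert',
        Message=f'{video_name}: {len(labels)} moderation label(s) detected.',
    )
```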
Finally, to wrap up the pipeline, we allow the source object to complete its lifecycle by moving to Amazon Glacier, much like a traditional media processing workflow that ends with low-cost, long-term deep archival.
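One way to express this transition is an S3 lifecycle rule. The following boto3 sketch shows the idea; the bucket name and transition age are assumptions.

```python
import boto3

s3 = boto3.client('s3')

# Transition objects in the ingest bucket to Amazon Glacier after one day (illustrative values).
s3.put_bucket_lifecycle_configuration(
    Bucket='example-media-ingest-bucket',
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'archive-processed-videos',
            'Filter': {'Prefix': ''},
            'Status': 'Enabled',
            'Transitions': [{'Days': 1, 'StorageClass': 'GLACIER'}],
        }]
    },
)
```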
Implementation
The procedure for deploying this architecture on AWS consists of the following steps.
Step 1. Download the AWS CloudFormation script.
- Download the CloudFormation script (YAML file) via this link.
- Store the CloudFormation script in an S3 bucket or locally.
Step 2. Execute the CloudFormation script.
- Log in to the AWS Management Console and open the CloudFormation console.
- Choose Create stack on the CloudFormation console.
- Under Choose a template, specify the CloudFormation script (YAML file) downloaded in Step 1.
- Choose Next.
Step 3. Launch the stack.
Enter input parameters for the stack:
- Stack Name – Provide a unique name to keep track of the stacks created.
- S3MediaBucket – Provide a unique name for the S3 bucket that the stack creates. See the rules for bucket naming.
- DynamoDBMetadataTableName – Provide a name for the DynamoDB table that stores the metadata of the inappropriate scenes within the video.
- DynamoDBVideoAnalysisTableName – Provide a name for the DynamoDB table that stores file information such as the JobId.
- EmailAddr – Provide a valid email address to be alerted if a video has any inappropriate content.
Step 4. Configure permissions for CloudFormation.
- Choose Next and let AWS handle creation of an IAM role based on the components created (a programmatic alternative to these console steps is sketched below).
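If you prefer to deploy programmatically, the following boto3 sketch creates the stack with the parameters listed in Step 3. The template file name and parameter values are assumptions; only the parameter keys come from the steps above.

```python
import boto3

cloudformation = boto3.client('cloudformation')

# Read the downloaded template (file name is illustrative).
with open('video-moderation.yaml') as template_file:
    template_body = template_file.read()

cloudformation.create_stack(
    StackName='video-content-moderation',
    TemplateBody=template_body,
    Parameters=[
        {'ParameterKey': 'S3MediaBucket', 'ParameterValue': 'example-media-ingest-bucket'},
        {'ParameterKey': 'DynamoDBMetadataTableName', 'ParameterValue': 'VideoMetadata'},
        {'ParameterKey': 'DynamoDBVideoAnalysisTableName', 'ParameterValue': 'VideoAnalysis'},
        {'ParameterKey': 'EmailAddr', 'ParameterValue': 'moderators@example.com'},
    ],
    # Acknowledges that the template creates IAM resources on your behalf.
    Capabilities=['CAPABILITY_IAM'],
)
```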
Step 5. Confirm email subscription.
After the CloudFormation stack reaches CREATE_COMPLETE status, check the inbox of the email address provided as an input parameter.
Choose Confirm subscription in the email sent to the email address provided at the time of stack creation.
Step 6. Run the solution.
Upload a video file in .mp4 format to the S3 bucket created in Step 3 (S3MediaBucket).
Observe the content moderation labels of the video in the DynamoDB table named in Step 3 (DynamoDBMetadataTableName). View the alert notification sent to the email address provided in Step 3 (EmailAddr), which shows the total number of scenes flagged with inappropriate content.
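The following sketch shows one way to exercise the pipeline from a script. The bucket, table, and attribute names follow the illustrative values used in the earlier sketches; substitute the values you entered as stack parameters.

```python
import boto3

# Illustrative names; use the values you entered as stack parameters.
BUCKET = 'example-media-ingest-bucket'
METADATA_TABLE = 'VideoMetadata'

# Upload a sample .mp4 file to the ingest bucket to start the pipeline.
boto3.client('s3').upload_file('sample.mp4', BUCKET, 'sample.mp4')

# Later, after the email alert arrives, inspect the stored moderation labels.
table = boto3.resource('dynamodb').Table(METADATA_TABLE)
for item in table.scan()['Items']:
    print(item['Timestamp'], item['Label'], item['Confidence'])
```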
Conclusion
The tremendous growth of user-generated content (UGC) has increased the need for companies to proactively alert themselves and their users to the existence of inappropriate content. Often, how a website moderates its UGC is an essential part of its online brand identity. This blog post specifically focuses on helping customers build that brand identity.
About the Author
Suman Koduri is a Senior Technical Account Manager at Amazon Web Services. He works with Enterprise Support customers, and provides technical guidance and assistance to help them make the best use of the AWS platform. In his spare time, he loves running half marathons and riding his motorcycle.