# Content Localization on AWS

Open-source repository: https://github.com/aws-solutions/content-localization-on-aws (primary language: Vue, 71.8%)

Welcome to the Content Localization on AWS project! This project helps you extend the reach of your VOD content by quickly and efficiently creating accurate multi-language subtitles using AWS AI Services. You can make manual corrections to the automatically created subtitles and use advanced AWS AI Service customization features to improve the results of the automation for your content domain. Content Localization is built on the Media Insights Engine (MIE), a framework that helps accelerate the development of serverless applications that process video, images, audio, and text with artificial intelligence and multimedia services on AWS.

Localization is the process of taking video content that was created for audiences in one geography and transforming it to make it relevant and accessible to audiences in a new geography. Creating alternative-language subtitle tracks is central to the localization process. This application presents a guided experience for automatically generating and correcting subtitles for videos in multiple languages using AWS AI Services. The corrections made by editors can be used to customize the results of AWS AI services for future workflows. This type of AI/ML workflow, which incorporates user corrections, is often referred to as "human in the loop". Content Localization workflows can make use of advanced customization features provided by Amazon Transcribe (Custom Vocabularies) and Amazon Translate (Custom Terminologies).
Application users can manually correct the results of the automation at different points in the automated workflow and then trigger a new workflow to include their corrections in downstream processing. Corrections are tracked and can be used to update Amazon Transcribe Custom Vocabularies and Amazon Translate Custom Terminologies to improve future results.

## Why use customizations and human-in-the-loop?

Automating the creation of translated subtitles using AI/ML promises to speed up the localization process for your content, but there are still challenges in achieving the level of accuracy required for specific use cases. With natural language processing, many aspects of the content itself may determine the level of accuracy AI/ML analysis is capable of achieving. Content characteristics that can impact transcription and translation accuracy include domain-specific language, speaker accents and dialects, new words recently introduced to common language, the need for contextual interpretation of ambiguous phrases, and correct translation of proper names. AWS AI services provide a variety of features to help customize the results of the machine learning for specific content. The workflow in this application therefore seeks to give users a guided experience for using these customization features as an extension of their normal editing workflow.

## Doesn't content localization involve more than just subtitles?

As a first step, this project seeks to create an efficient, customizable workflow for creating multi-language subtitles. We hope that this project will grow to apply more AWS AI Services to help automate other parts of the localization process. For this reason, the application workflow includes options to generate other useful types of analysis available in AWS AI Services. While this analysis is not performed in the base workflow, developers can enable it to explore the available data and help extend the application.
We hope these possibilities inspire builders to extend this application.
## Deployment

The following CloudFormation templates deploy the Content Localization front-end application with a prebuilt version of the most recent MIE release.
For more installation options, see the Advanced Installation Options section.

## Screenshots

Translation analysis:

Workflow configuration:

## COST

You are responsible for the cost of the AWS services used while running this application. The primary cost factors come from using Amazon Rekognition, Amazon Transcribe, Amazon Translate, Amazon Comprehend, Amazon Polly, and Amazon OpenSearch Service (successor to Amazon Elasticsearch Service). With all services enabled, videos cost about $0.50 per minute to process, but this can vary between $0.10 and $0.60 per minute depending on the video content and the types of analysis enabled in the application. The default workflow for Content Localization only enables Amazon Transcribe, Amazon Translate, Amazon Comprehend, and Amazon Polly. Data storage and Amazon OpenSearch Service will cost approximately $10.00 per day regardless of the quantity or type of video content. After a video is uploaded into the solution, the costs for processing are a one-time expense, but data storage costs accrue daily. For more information about cost, see the pricing webpage for each AWS service used in this solution. If you need to process a large volume of videos, we recommend that you contact your AWS account representative for at-scale pricing.

## Subtitle workflow

After uploading a video or image in the GUI, the application runs a workflow in MIE that extracts insights using a variety of media analysis services on AWS and stores them in a search engine for easy exploration. The following flow diagram illustrates this workflow:

[Image: Workflow.png]

This application includes several optional analysis features.
Users can enable or disable individual operators in the upload view.

## Search Capabilities

The search field in the Collection view provides the ability to find media assets that contain specified metadata terms. Search queries are executed by Amazon OpenSearch, which uses full-text search techniques to examine all the words in every metadata document in its database. Everything you see in the analysis page is searchable; even data that is excluded by the threshold you set in the Confidence slider is searchable. Search queries must use valid Lucene syntax.
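As an illustration, here are a few hypothetical Lucene-style searches and the kind of `query_string` request body the application could send to OpenSearch. The field names below (`Operator`, `Confidence`) are assumptions for illustration, not taken from the solution's actual index mapping.

```python
def build_search_body(lucene_query, size=25):
    """Wrap a Lucene query string in an OpenSearch query_string request body."""
    return {
        "size": size,
        "query": {"query_string": {"query": lucene_query}},
    }

# Sample Lucene-syntax searches (field names are hypothetical):
samples = [
    'Violence',                                # full-text match anywhere in the metadata
    'Operator:transcribe AND Confidence:>80',  # field-scoped search with a range
    '"Jeff Bezos"~2',                          # phrase search with proximity
]

body = build_search_body(samples[1])
```

Any of these strings can be typed directly into the Collection view's search field, since OpenSearch's `query_string` query accepts the same Lucene syntax.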
## Advanced Installation Options

### Building the solution from source code

You can also build the Content Localization solution from source code. Be sure to define values for the environment variables the build script expects before running it.
Once you have built the demo app, it's time to deploy it. You have two options, depending on whether you want to deploy over an existing MIE stack or a new one.

### Option 1: Install Content Localization on AWS over an existing MIE stack

This option deploys the demo app over an existing MIE stack.
### Option 2: Install Content Localization on AWS with a new MIE stack

This option deploys the demo app along with a new MIE stack.
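The two options above can be sketched programmatically with boto3. The template URL and the parameter keys (`AdminEmail`, `MieStackName`) are assumptions for illustration; check the templates under `deployment/` for the real parameter names.

```python
def build_stack_parameters(admin_email, existing_mie_stack=None):
    """Assemble CloudFormation parameters for either deployment option.

    Parameter keys here are hypothetical; consult the deployment/ templates.
    """
    params = [{"ParameterKey": "AdminEmail", "ParameterValue": admin_email}]
    if existing_mie_stack:
        # Option 1: point the template at an already-deployed MIE stack.
        params.append({"ParameterKey": "MieStackName",
                       "ParameterValue": existing_mie_stack})
    return params


def deploy(stack_name, template_url, admin_email, existing_mie_stack=None,
           region="us-east-1"):
    import boto3  # deferred so the helper above works without boto3 installed
    cfn = boto3.client("cloudformation", region_name=region)
    return cfn.create_stack(
        StackName=stack_name,
        TemplateURL=template_url,
        Parameters=build_stack_parameters(admin_email, existing_mie_stack),
        # The solution creates IAM roles and nested stacks, so these
        # capabilities are typically required.
        Capabilities=["CAPABILITY_IAM", "CAPABILITY_NAMED_IAM",
                      "CAPABILITY_AUTO_EXPAND"],
    )
```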
## Tests

See the tests README document for information on how to run tests for this project.

## Advanced Usage

### Starting workflows from the command line

(Difficulty: 10 minutes)

The content localization workflow used by this application can be invoked from any HTTP client that supports AWS_IAM authorization, such as awscurl. The `ContentLocalizationWorkflow` can be started with either its default configuration or a custom configuration.
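The same invocation can be sketched in Python using SigV4-signed HTTP requests, which is what awscurl does under the hood. The endpoint path (`/workflow/execution`) and the payload shape are assumptions based on the MIE workflow API; verify them against your deployed API before relying on them.

```python
def build_workflow_payload(bucket, key, configuration=None):
    """Build the request body for starting ContentLocalizationWorkflow on an S3 object."""
    payload = {
        "Name": "ContentLocalizationWorkflow",
        "Input": {"Media": {"Video": {"S3Bucket": bucket, "S3Key": key}}},
    }
    if configuration:  # custom configuration overrides the workflow defaults
        payload["Configuration"] = configuration
    return payload


def start_workflow(endpoint, bucket, key, configuration=None, region="us-east-1"):
    # Deferred imports: botocore/requests are only needed to send the request.
    import json
    import requests
    from botocore.auth import SigV4Auth
    from botocore.awsrequest import AWSRequest
    from botocore.session import Session

    body = json.dumps(build_workflow_payload(bucket, key, configuration))
    request = AWSRequest(method="POST", url=endpoint + "/workflow/execution",
                         data=body, headers={"Content-Type": "application/json"})
    # Sign the request with AWS_IAM (SigV4) credentials for API Gateway.
    SigV4Auth(Session().get_credentials(), "execute-api", region).add_auth(request)
    return requests.post(request.url, data=body, headers=dict(request.headers))
```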
### Starting workflows from a Python Lambda function

(Difficulty: 10 minutes)

Python code running in an AWS Lambda function can be used to execute the image analysis workflow.

### Starting workflows from an S3 trigger

(Difficulty: 10 minutes)

Workflows can be started automatically when files are copied to a designated S3 bucket.
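A minimal sketch of a Lambda handler for the S3-trigger approach: it extracts the uploaded object's location from the S3 event notification and would then start the workflow. The actual workflow API call is omitted to keep the sketch self-contained; in a real deployment it would be the signed POST described in the command-line section.

```python
from urllib.parse import unquote_plus


def parse_s3_event(event):
    """Extract (bucket, key) pairs from an S3 event notification.

    Object keys in S3 events are URL-encoded, so they are decoded here.
    """
    return [
        (r["s3"]["bucket"]["name"], unquote_plus(r["s3"]["object"]["key"]))
        for r in event.get("Records", [])
    ]


def lambda_handler(event, context):
    started = []
    for bucket, key in parse_s3_event(event):
        # In a real deployment, issue the workflow API request here
        # (e.g. a SigV4-signed POST to the workflow endpoint).
        started.append({"bucket": bucket, "key": key})
    return {"started": started}
```

Attaching this function to the bucket's `s3:ObjectCreated:*` event notification would start a workflow for each uploaded file.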
### Adding new operators and extending data stream consumers

(Difficulty: 60 minutes)

The GUI for this demo application loads media analysis data from Amazon OpenSearch. If you create a new analysis operator (see the MIE Implementation Guide) and want to surface data from that new operator in this demo application, then edit the OpenSearch consumer so that it indexes the new operator's data. Finally, you will need to write front-end code to retrieve your new operator's data from OpenSearch and render it in the GUI. When you trigger workflows with your new operator, you can validate how that operator's data is being processed from the Elasticsearch consumer log. To find this log, search Lambda functions for "ElasticsearchConsumer".

### Validate metadata in OpenSearch

Validating data in OpenSearch is easiest via the Kibana GUI. However, access to Kibana is disabled by default. To enable it, open your Amazon OpenSearch Service domain in the AWS Console, click "Edit security configuration" under the Actions menu, then add a policy that allows connections from your local IP address, as reported by https://checkip.amazonaws.com/, such as:
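A sketch of such an IP-based access policy is shown below. The IP address, account ID, and domain name are placeholders; substitute your own values (and note that `es:*` is broad — narrow the actions if your security posture requires it).

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"AWS": "*"},
      "Action": "es:*",
      "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.7/32"}},
      "Resource": "arn:aws:es:us-east-1:111122223333:domain/<your-domain>/*"
    }
  ]
}
```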
Click Submit to save the new policy. After your domain has finished updating, click the link to open Kibana, then click the Discover link in the left-hand menu. This takes you to a page for creating an index pattern if you haven't created one already. Create an index pattern that matches the MIE indices. Now you can use Kibana to validate that your operator's data is present in OpenSearch, and can therefore be surfaced in the user interface. You can validate data from new operators by running a workflow where that operator is the only enabled operator, then searching in Kibana for the asset_id produced by that workflow.

## User Authentication

This solution uses Amazon Cognito for user authentication. When a user logs into the web application, Cognito provides temporary tokens that front-end JavaScript components use to authenticate to back-end APIs in API Gateway and Elasticsearch. To learn more about these tokens, see Using Tokens with User Pools in the Amazon Cognito documentation. The front-end JavaScript components in this application use the Amplify Framework to perform back-end requests. You won't see any explicit handling of Cognito tokens in the source code for this application because it is all handled internally by the Amplify Framework.

### User account management

All the necessary Cognito resources for this solution are configured in the deployment/content-localization-on-aws-auth.yaml CloudFormation template, which includes an initial administration account. A temporary password for this account will be sent to the email address specified during the CloudFormation deployment. This administration account can be used to create additional user accounts for the application.
Newly created users can then sign in to the web application.

## Uninstall

To uninstall the Content Localization on AWS solution, delete the CloudFormation stack, as described below. This deletes all the resources created for the solution except the Amazon S3 buckets, which must be removed separately.

### Option 1: Uninstall using the AWS Management Console

Sign in to the AWS CloudFormation console, select the solution's stack, and choose Delete.
### Option 2: Uninstall using the AWS Command Line Interface

Run `aws cloudformation delete-stack --stack-name <stack-name>`, substituting the name of the solution's stack.
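The same deletion can be done with boto3, which also lets you block until the stack is gone. This is a sketch, not part of the solution's tooling; the stack-name validator simply mirrors CloudFormation's naming rules.

```python
import re


def is_valid_stack_name(name):
    """CloudFormation stack names: letters, digits, and hyphens only,
    starting with a letter, at most 128 characters."""
    return bool(re.fullmatch(r"[A-Za-z][A-Za-z0-9-]{0,127}", name))


def delete_stack(stack_name, region="us-east-1"):
    if not is_valid_stack_name(stack_name):
        raise ValueError(f"not a valid stack name: {stack_name!r}")
    import boto3  # deferred so the validator above works without boto3
    cfn = boto3.client("cloudformation", region_name=region)
    cfn.delete_stack(StackName=stack_name)
    # Block until deletion finishes; the waiter raises if the delete fails.
    cfn.get_waiter("stack_delete_complete").wait(StackName=stack_name)
```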
### Deleting Content Localization S3 buckets

Content Localization on AWS creates two S3 buckets that are not automatically deleted. To delete these buckets, use the steps below.
To delete an S3 bucket using the AWS CLI, run `aws s3 rb s3://<bucket-name> --force`, which empties the bucket and then removes it.
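The equivalent cleanup with boto3 looks like the sketch below: a bucket must be emptied before it can be deleted, and `delete_objects` accepts at most 1000 keys per call. This sketch assumes unversioned buckets; versioned buckets also need their object versions and delete markers removed.

```python
def batched(items, n=1000):
    """Split a list into chunks of at most n items (the delete_objects limit)."""
    return [items[i:i + n] for i in range(0, len(items), n)]


def empty_and_delete_bucket(bucket_name, region="us-east-1"):
    import boto3  # deferred so batched() works without boto3 installed
    s3 = boto3.client("s3", region_name=region)
    # Collect every key in the bucket.
    keys = []
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket_name):
        keys.extend(obj["Key"] for obj in page.get("Contents", []))
    # Delete objects in batches, then remove the now-empty bucket.
    for chunk in batched(keys):
        s3.delete_objects(Bucket=bucket_name,
                          Delete={"Objects": [{"Key": k} for k in chunk]})
    s3.delete_bucket(Bucket=bucket_name)
```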
## Collection of operational metrics

This solution collects anonymous operational metrics to help AWS improve the quality and features of the solution. For more information, including how to disable this capability, please see the implementation guide. When enabled, anonymous information about the deployment and its configuration is collected and sent to AWS.
Example data:
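A hypothetical payload is sketched below; every field name and value is an illustrative placeholder, not actual telemetry from this solution.

```json
{
  "Solution": "SO0xxx",
  "UUID": "00000000-0000-0000-0000-000000000000",
  "TimeStamp": "2022-01-01T00:00:00.000000",
  "Data": {
    "Version": "v2.0.0",
    "CFTemplate": "Created"
  }
}
```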
To opt out of this reporting, edit deployment/content-localization-on-aws.yaml and change the anonymous data setting from enabled to disabled.
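In AWS Solutions templates this toggle is usually a CloudFormation mapping; a sketch of the change, assuming the standard layout (the mapping names here may differ from the actual template):

```yaml
Mappings:
  AnonymousData:
    SendAnonymousData:
      Data: "No"   # was "Yes"; disables operational metrics reporting
```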
## Help

Join our Gitter chat at https://gitter.im/awslabs/aws-media-insights-engine. This public chat forum was created to foster communication between MIE developers worldwide.

## Known Issues

Visit the Issues page in this repository for known issues and feature requests.

## Contributing

See the CONTRIBUTING file for how to contribute.

## License

See the LICENSE file for our project's licensing.

Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.