
Amazon Rekognition Custom Labels Feedback

The Model Feedback solution enables you to give feedback on your model's predictions and to improve the model through human verification. Depending on the use case, you can be successful with a training dataset that has only a few images, but a larger annotated training set may be required to build a more accurate model. The Model Feedback solution lets you create a larger dataset through model assistance.

The workflow for continuous model improvement is as follows:

  • Train the first version of your model with a small training dataset.
  • Provide an unannotated dataset to the Model Feedback solution.
  • The Model Feedback solution uses the current model to start human verification jobs that annotate the new dataset.
  • Based on human feedback, the Model Feedback solution generates a manifest file that you use to create a new model.
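The model-assisted annotation step above boils down to running the current model over each unannotated image and collecting its predictions for human verification. A minimal sketch of that step with boto3 (the bucket, keys, and ARN are placeholders; the solution's own start-feedback.py handles this for you):

```python
def predict_unannotated(rekognition, project_version_arn, bucket, keys,
                        min_confidence=50):
    """Run the current Custom Labels model over unannotated S3 images.

    Returns {image_key: [CustomLabel dicts]} -- the predictions that
    humans will later verify. Illustrative sketch only.
    """
    predictions = {}
    for key in keys:
        response = rekognition.detect_custom_labels(
            ProjectVersionArn=project_version_arn,
            Image={"S3Object": {"Bucket": bucket, "Name": key}},
            MinConfidence=min_confidence,
        )
        predictions[key] = response["CustomLabels"]
    return predictions
```

With real AWS credentials you would pass `boto3.client("rekognition")`; the function also accepts any object with the same `detect_custom_labels` signature, which makes it easy to exercise offline.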

Deployment

Deploy CloudFormation stack in one of the AWS regions where you are using Amazon Rekognition Custom Labels.

Region Launch
US East (N. Virginia) Deploy Feedback
EU West (Ireland) Deploy Feedback

Configuration

After the CloudFormation stack is deployed, open the Outputs tab and make a note of the following values. You will need them in the steps below to run the feedback client.

  1. jobRoleArn
  2. preLambdaArn
  3. postLambdaArn
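For orientation, a feedback-config.json might look like the sketch below. Only jobRoleArn, preLambdaArn, and postLambdaArn (the stack outputs above) and projectVersionArn (used in step 5 of the next section) come from this document; the account IDs and resource names are illustrative placeholders, and any additional fields the config requires are not shown. Use the sample config shipped in src/ as the authoritative template.

```json
{
  "projectVersionArn": "arn:aws:rekognition:us-east-1:111122223333:project/my-project/version/my-model/1234567890123",
  "jobRoleArn": "arn:aws:iam::111122223333:role/FeedbackStack-JobRole",
  "preLambdaArn": "arn:aws:lambda:us-east-1:111122223333:function:FeedbackStack-PreLambda",
  "postLambdaArn": "arn:aws:lambda:us-east-1:111122223333:function:FeedbackStack-PostLambda"
}
```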

Running Feedback Client

Prerequisites:

You can run the feedback client from a terminal with the following installed:

Steps

  1. Open a terminal (on your local desktop, an EC2 instance, etc.)
  2. Run git clone https://github.com/aws-samples/amazon-rekognition-custom-labels-feedback-solution
  3. Run cd amazon-rekognition-custom-labels-feedback-solution/src
  4. Update feedback-config.json in the src/ folder with values for your environment.
  5. Run python3 start-feedback.py. This analyzes your images using projectVersionArn and starts Ground Truth label verification jobs. The script prints a command that you can later use to generate the manifest file for the dataset.
  6. After the label verification jobs are complete in Ground Truth, run the command you got in step 5. This generates the dataset manifest file that you can use to train the next version of your model in Amazon Rekognition Custom Labels.
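The generated manifest is a JSON Lines file in the SageMaker Ground Truth augmented-manifest format: one JSON object per line, with a source-ref pointing at the image plus a label attribute holding the annotations. A hedged sketch of reading one (the "labels" attribute name is an assumption; check a line of your own manifest for the attribute your job actually wrote):

```python
import json

def read_manifest(lines, label_attribute="labels"):
    """Parse augmented-manifest JSON Lines into (image_uri, label_data) pairs.

    `label_attribute` is whatever attribute name your verification job
    used -- an assumption here, so verify against your own manifest.
    """
    entries = []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines between records
        record = json.loads(line)
        entries.append((record["source-ref"], record.get(label_attribute)))
    return entries
```

You could feed this the lines of the manifest file produced in step 6 to spot-check the annotations before training the next model version.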

Cost

Deploying this CloudFormation stack creates AWS resources (IAM roles and AWS Lambda functions), and you will be charged for the resources created as part of the stack deployment. To avoid recurring charges, delete the stack when you are done.

License

This library is licensed under the MIT-0 License. See the LICENSE file.

Contributors

amazon-auto, darwaishx, sherryxding, sunbc0120


Issues

The solution is not working in region eu-west-1

The documentation says "Deploy CloudFormation stack in one of the AWS regions where you are using Amazon Rekognition Custom Labels." But the deployment fails if I change the region in the link to eu-west-1.

It fails on the step GTLabelVerificationPreLambda with error: Error occurred while GetObject. S3 Error Code: PermanentRedirect. S3 Error Message: The bucket is in this region: us-east-1. Please use this region to retry the request (Service: AWSLambdaInternal; Status Code: 400; Error Code: InvalidParameterValueException; Request ID: 2c779ae6-b8e2-4462-8721-a0bc13ee1e5a)

Why are all image confidence values in the manifests fixed at 0.9 rather than giving the real model values?

Hi, I am trying to use the manifest output of this code to gather label, bounding box, and confidence data for a large number of images.
Why do the manifest files all show a fixed confidence of 0.9 rather than the actual label confidence?
Line 422 in 'start-feedback.py' just appends 0.9 rather than a dynamic confidence from the model.
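If you need the real model confidences, note that DetectCustomLabels returns Confidence as a percentage (0-100) per label, while manifest confidence fields are typically expressed as 0-1. A hedged sketch of deriving per-label confidences from the API response instead of appending a constant (the response shape is the documented DetectCustomLabels output; wiring this into start-feedback.py is left to you):

```python
def confidences_from_response(response):
    """Map a DetectCustomLabels response to per-label confidences in 0-1.

    Returns {label_name: confidence} -- a sketch of what could replace
    the hard-coded 0.9 when building the manifest.
    """
    return {
        label["Name"]: round(label["Confidence"] / 100.0, 4)
        for label in response.get("CustomLabels", [])
    }
```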

This code would be really helpful if there were some additional documentation for people who are not strong coders but need to analyse images using ML. Any suggestions or pointers would be very welcome and would help with a conservation project.
