From the Rekognition Immersion Day prerequisites, launch the CloudFormation stack in the "Launch Amazon SageMaker Notebook Instance" section.
Note: You don't need to do the "Download necessary notebooks" section on that page.
Next, create an S3 bucket.
When you create a model training job, you will store the following in an Amazon S3 bucket:
- The model training data
- Model artifacts, which Amazon SageMaker generates during model training
You can store the training data and artifacts in a single bucket or in two separate buckets. For exercises in this guide, one bucket is sufficient. You can use existing buckets or create new ones.
Follow the instructions in Create a Bucket in the Amazon Simple Storage Service Console User Guide. Include sagemaker in the bucket name; for example, sagemaker-datetime.
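The bucket-creation step above can also be scripted. Below is a minimal sketch using boto3; it assumes AWS credentials are already configured, and the `make_bucket_name` helper and the region default are illustrative choices, not part of the workshop.

```python
import datetime


def make_bucket_name(prefix="sagemaker"):
    """Build a bucket name that includes 'sagemaker', e.g. sagemaker-20240101-120000."""
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    return f"{prefix}-{stamp}"


def create_bucket(name, region="us-east-1"):
    """Create an S3 bucket; note us-east-1 must omit LocationConstraint."""
    import boto3  # imported here so the name helper works even without boto3 installed

    s3 = boto3.client("s3", region_name=region)
    if region == "us-east-1":
        s3.create_bucket(Bucket=name)
    else:
        s3.create_bucket(
            Bucket=name,
            CreateBucketConfiguration={"LocationConstraint": region},
        )
```

A typical call would be `create_bucket(make_bucket_name())`, which yields a name like the `sagemaker-datetime` example above.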
Click "Open JupyterLab".
Enter the Git URL "https://github.com/aws-samples/amazon-rekognition-workshops" in the dialog box and click "Clone".
After cloning completes, verify that a directory named "amazon-rekognition-workshops" has been created.
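If you prefer cloning from a terminal (for example, the JupyterLab terminal) instead of the clone dialog, the step can be sketched in Python as below; the helper names are illustrative, not part of the workshop.

```python
import subprocess

REPO_URL = "https://github.com/aws-samples/amazon-rekognition-workshops"


def build_clone_cmd(url, dest=None):
    """Assemble the git clone command; dest is an optional target directory."""
    cmd = ["git", "clone", url]
    if dest:
        cmd.append(dest)
    return cmd


def clone_repo(url=REPO_URL, dest=None):
    """Run git clone and raise CalledProcessError if it fails."""
    subprocess.run(build_clone_cmd(url, dest), check=True)
```

Calling `clone_repo()` creates the same "amazon-rekognition-workshops" directory the dialog-based clone produces.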
Click "Open Jupyter".
Click "amazon-rekognition-workshops" to open the folder
Click "ObjectDetection" to open the folder
Click "ground_truth_object_detection_tutorial.ipynb" to open the notebook in the browser, then follow the instructions in the notebook.
See CONTRIBUTING for more information.
This library is licensed under the MIT-0 License. See the LICENSE file.