awslabs / aws-solutions-constructs

The AWS Solutions Constructs Library is an open-source extension of the AWS Cloud Development Kit (AWS CDK) that provides multi-service, well-architected patterns for quickly defining solutions

Home Page: https://docs.aws.amazon.com/solutions/latest/constructs/

License: Apache License 2.0

TypeScript 92.88% JavaScript 5.60% Python 0.95% Shell 0.51% Roff 0.02% HTML 0.03% CSS 0.01%
aws-cdk constructs architectural-patterns

aws-solutions-constructs's Introduction

AWS Solutions Constructs

Browse Library: https://aws.amazon.com/solutions/constructs/patterns/
Reference Documentation: https://docs.aws.amazon.com/solutions/latest/constructs/

The AWS Solutions Constructs library is an open-source extension of the AWS Cloud Development Kit (AWS CDK) that provides multi-service, well-architected patterns for quickly defining solutions in code to create predictable and repeatable infrastructure. The goal of AWS Solutions Constructs is to accelerate the experience for developers to build solutions of any size using pattern-based definitions for their architecture.

The patterns defined in AWS Solutions Constructs are high level, multi-service abstractions of AWS CDK constructs that have default configurations based on well-architected best practices. The library is organized into logical modules using object-oriented techniques to create each architectural pattern model.

CDK Versions

AWS Solutions Constructs and the AWS CDK are maintained by independent teams with different release schedules. Each release of AWS Solutions Constructs is built against a specific version of the AWS CDK. The CHANGELOG.md file lists the CDK version associated with each AWS Solutions Constructs release. For instance, AWS Solutions Constructs v2.39.0 was built against AWS CDK v2.76.0, which means that to use AWS Solutions Constructs v2.39.0, your application must include AWS CDK v2.76.0 or later. You can continue to use the latest AWS CDK versions and upgrade your AWS Solutions Constructs version when new releases become available.
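
For example, a CDK v2 application pairing the releases mentioned above could declare its dependencies along these lines (the pattern package chosen here is illustrative; the exact versions to pair always come from CHANGELOG.md):

{
  "dependencies": {
    "aws-cdk-lib": "2.76.0",
    "constructs": "^10.0.0",
    "@aws-solutions-constructs/aws-apigateway-lambda": "2.39.0"
  }
}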

Modules

The AWS Solutions Constructs library is organized into several modules. They are named like this:

  • aws-xxx: a well-architected pattern package for the indicated services. Such a package contains constructs that combine multiple AWS CDK service modules to configure the given pattern.
  • xxx: packages that don't start with "aws-" are core modules used to configure best-practice defaults for the services used within the pattern library. They are not intended to be accessed directly.

Module Contents

Modules contain the following types:

  • Patterns - All higher-level, multi-service constructs in this library.
  • Other Types - All non-construct classes, interfaces, structs and enums that exist to support the patterns.

Patterns take a set of (input) properties in their constructor; the set of properties (and which ones are required) can be seen on a pattern's documentation page.

The pattern's documentation page also lists the available methods to call and the properties which can be used to retrieve information about the pattern after it has been instantiated.
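
For example, instantiating a pattern inside a stack and reading back one of its properties might look like the sketch below (shown with the aws-apigateway-lambda pattern; consult the pattern's documentation page for the authoritative prop names):

import * as cdk from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { ApiGatewayToLambda } from '@aws-solutions-constructs/aws-apigateway-lambda';

// Inside a Stack: the pattern creates both the REST API and the Lambda function
// with well-architected defaults; only the function code has to be supplied.
const pattern = new ApiGatewayToLambda(this, 'ApiGatewayToLambdaPattern', {
  lambdaFunctionProps: {
    runtime: lambda.Runtime.NODEJS_18_X,
    handler: 'index.handler',
    code: lambda.Code.fromAsset('lambda'),
  },
});

// Properties exposed by the pattern can be read back after instantiation.
new cdk.CfnOutput(this, 'ApiUrl', { value: pattern.apiGateway.url });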

Sample Use Cases

This library includes a collection of functional use case implementations to demonstrate the usage of AWS Solutions Constructs architectural patterns. These can be used in the same way as architectural patterns, and can be conceptualized as an additional "higher-level" abstraction of those patterns. The following use cases are provided as functional examples:


© Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.

aws-solutions-constructs's People

Contributors

aassadza, aijunpeng, aws-solutions-constructs-team, beomseoklee, biffgaut, danielmatuki, dscpinheiro, eggoynes, emcfins, ericquinones, fargito, georgebearden, gockle, harunhasdal, hayesry, hnishar, iamtb13, joe-king-sh, kiley0, knihit, lloydchang, mickychetta, naseemkullah, pvbouwel, shsenior, stfs, surukonda, tabdunabi, tbelmega, winteryukky


aws-solutions-constructs's Issues

New Pattern: aws-s3-sqs

Add your +1 👍 to help us prioritize

Overview:

This AWS Solutions Construct implements an Amazon S3 Bucket that is configured to send notifications to an Amazon SQS queue.

User provided props for the construct:

  • Either an existing instance of s3.Bucket or s3.BucketProps to deploy a new S3 Bucket
  • Either an existing instance of sqs.Queue or sqs.QueueProps to deploy a new SQS queue
  • Optional deployDeadLetterQueue to deploy the DLQ (default: true)
  • Optional deadLetterQueueProps for the DLQ

Default settings

An out-of-the-box implementation of the construct without any overrides will set the following defaults (a usage sketch follows these defaults):

Amazon S3 Bucket

  • Configure Access logging for S3 Bucket
  • Enable server-side encryption for S3 Bucket using AWS managed KMS Key
  • Turn on the versioning for S3 Bucket
  • Don't allow public access for S3 Bucket
  • Retain the S3 Bucket when deleting the CloudFormation stack

Amazon SQS Queue

  • Deploy SQS dead-letter queue for the source SQS Queue
  • Enable server-side encryption for source SQS Queue using AWS Managed KMS Key
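
A usage sketch for the proposed construct; the class, package, and prop names below are hypothetical (derived from the prop list above), not a published API:

import { S3ToSqs } from '@aws-solutions-constructs/aws-s3-sqs'; // hypothetical package and class

// Deploy a new bucket and queue with the defaults listed above.
new S3ToSqs(this, 'S3ToSqsPattern', {
  bucketProps: { versioned: true },  // or an existing bucket instead of bucket props
  deployDeadLetterQueue: true,       // default per the proposal above
});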

Update: Expose all cdk objects created by the construct as pattern properties

Issue:

AWS Solutions Constructs currently expose a limited number of the CDK objects created by each construct; for example, aws-apigateway-lambda exposes the apiGateway and lambdaFunction as Pattern Properties. But other related objects (e.g. the CloudWatch LogGroup, IAM Role, etc.) created by the construct are not directly accessible to the user as Pattern Properties.

Solution:

Add the other related CDK objects created by the construct and expose them as Pattern Properties.
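
For illustration, aws-apigateway-lambda exposes two objects today; the commented-out properties below are hypothetical examples of the additional objects this request asks to expose:

import { ApiGatewayToLambda } from '@aws-solutions-constructs/aws-apigateway-lambda';

declare const pattern: ApiGatewayToLambda; // an already-instantiated pattern

// Exposed today:
const restApi = pattern.apiGateway;      // api.RestApi
const handler = pattern.lambdaFunction;  // lambda.Function

// Requested by this issue (property names are illustrative only):
// const logGroup = pattern.apiGatewayLogGroup;       // access-log LogGroup created by the construct
// const cwRole   = pattern.apiGatewayCloudWatchRole; // IAM role created for API Gateway logging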

New pattern: aws-lambda-ssm-parameter

Give the user an option to choose the permissions granted on the SSM parameter store, e.g. read (the default) or read-write.

For example, check here

Use Case

Proposed Solution

Other

  • 👋 I may be able to implement this feature request
  • ⚠️ This feature might incur a breaking change

This is a 🚀 Feature Request

New Pattern: aws-codebuild-ecr

Build a new pattern that allows customers to build Docker containers and push them to Amazon Elastic Container Registry (ECR).

Use Case

SageMaker consumes models as Docker images. Therefore, many machine learning models need to be wrapped in a Docker container and pushed to ECR so that SageMaker can take the image's URL as input when deploying the model.

Besides that, this construct can be very useful for CI/CD workloads, since it allows customers to build a Docker container and push it to a container registry all within a CodeBuild project.

Proposed Solution

The solution is a new construct that takes a Dockerfile and a buildspec.yml from a CodeBuild source configuration (CodeCommit, GitHub, CodePipeline, or S3), creates a CodeBuild project that has Docker installed, builds a container image using the Dockerfile, and then obtains credentials from ECR and pushes the built image to ECR.
The reference for this construct will serve the same purpose as this guide.
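
A rough sketch of the building blocks such a construct would wire together, assuming a GitHub source (the source details and names are illustrative, not the proposed construct's API):

import * as codebuild from 'aws-cdk-lib/aws-codebuild';
import * as ecr from 'aws-cdk-lib/aws-ecr';

const repository = new ecr.Repository(this, 'ModelRepository');

const project = new codebuild.Project(this, 'DockerBuildProject', {
  source: codebuild.Source.gitHub({ owner: 'my-org', repo: 'my-model' }), // illustrative source
  environment: {
    buildImage: codebuild.LinuxBuildImage.STANDARD_5_0,
    privileged: true, // needed to run the Docker daemon during the build
  },
  // The buildspec.yml in the source builds the image and pushes it to ECR.
});

// Allow the build role to authenticate against ECR and push the image.
repository.grantPullPush(project);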

Other

Docker Sample for CodeBuild

  • 👋 I may be able to implement this feature request

This is a 🚀 Feature Request

New Pattern: aws-sns-sqs

Add your +1 👍 to help us prioritize

Overview:

This AWS Solutions Construct implements an Amazon SNS Topic connected to an Amazon SQS queue.

User provided props for the construct:

  • Either an existing instance of sns.Topic or sns.TopicProps to deploy a new SNS topic
  • Either an existing instance of sqs.Queue or sqs.QueueProps to deploy a new SQS queue
  • Optional enableEncryption to encrypt the SNS topic (default: true)
  • Optional encryptionKey user provided encryption key
  • Optional deployDeadLetterQueue to deploy the DLQ (default: true)
  • Optional deadLetterQueueProps for the DLQ

Default settings

An out-of-the-box implementation of the construct without any overrides will set the following defaults:

Amazon SNS Topic

  • Enable server-side encryption for the SNS Topic using a Customer managed KMS Key

Amazon SQS Queue

  • Configure least privilege access permissions for SQS Queue
  • Deploy SQS dead-letter queue for the source SQS Queue.
  • Enable server-side encryption for source SQS Queue using AWS Managed KMS Key.

Update: Enforce encryption of data in transit

Issue:

In addition to encryption of data at rest, AWS recommends enforcing encryption of data in transit for services like Amazon S3, Amazon SQS, and Amazon SNS. Enhance the following patterns to apply the best practice of enforcing encryption of data in transit:

  • aws-apigateway-sqs
  • aws-cloudfront-s3
  • aws-iot-kinesisfirehose-s3
  • aws-kinesisfirehose-s3
  • aws-kinesisfirehose-s3-and-kinesisanalytics
  • aws-lambda-s3
  • aws-lambda-sns
  • aws-lambda-sqs
  • aws-lambda-sqs-lambda
  • aws-s3-lambda
  • aws-s3-step-function
  • aws-sns-lambda
  • aws-sqs-lambda

Solution:

Apply a resource policy to the S3 Bucket, SNS Topic, or SQS Queue created by the constructs that allows only encrypted connections over HTTPS (TLS), using the aws:SecureTransport condition.
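
For an S3 bucket, for example, such a statement can be sketched in CDK as follows (CDK v2 module paths shown; the bucket variable stands in for the bucket created by a construct):

import * as iam from 'aws-cdk-lib/aws-iam';
import * as s3 from 'aws-cdk-lib/aws-s3';

declare const bucket: s3.Bucket; // the bucket created by the construct (illustrative)

// Deny any request that does not arrive over TLS.
bucket.addToResourcePolicy(new iam.PolicyStatement({
  sid: 'HttpsOnly',
  effect: iam.Effect.DENY,
  principals: [new iam.AnyPrincipal()],
  actions: ['*'],
  resources: [bucket.bucketArn, bucket.arnForObjects('*')],
  conditions: { Bool: { 'aws:SecureTransport': 'false' } },
}));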

Best practices documentation:

Amazon S3: https://docs.aws.amazon.com/AmazonS3/latest/dev/security-best-practices.html
Amazon SNS: https://docs.aws.amazon.com/sns/latest/dg/sns-security-best-practices.html#enforce-encryption-data-in-transit
Amazon SQS: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-security-best-practices.html#ensure-queues-not-publicly-accessible

Pattern Request: aws-apigateway-iot

Use Case

HTTPS -> MQTT proxy
Create a pattern that allows HTTPS clients to talk to an MQTT proxy

Proposed Solution

Other

  • 👋 I may be able to implement this feature request
  • ⚠️ This feature might incur a breaking change

This is a 🚀 Feature Request

Why is KinesisFirehoseToS3Props.existingBucketObj of type s3.Bucket and not s3.IBucket

I am trying to create a KinesisFirehose to an existing S3 bucket. I would like to be able to pass the existing bucket (not created via CDK or CloudFormation) to the KinesisFirehoseToS3 constructor in the KinesisFirehoseToS3Props. I am looking the bucket up using Bucket.fromBucketName which returns an IBucket rather than a Bucket. Since the existingBucketObj attribute of KinesisFirehoseToS3Props is of type Bucket, I cannot pass the IBucket reference to this construct.

It appears that the implementation of KinesisFirehoseToS3 does not make use of any features of the existingBucketObj property that are not in IBucket. Would it be possible to change the type of the existingBucketObj property to IBucket?
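
For reference, this is the kind of usage the change would enable (CDK v2 module path shown for illustration):

import * as s3 from 'aws-cdk-lib/aws-s3';

// Looking up a bucket that was not created by this CDK app returns an IBucket, not a Bucket.
const existingBucket: s3.IBucket = s3.Bucket.fromBucketName(this, 'ImportedBucket', 'my-existing-bucket');

// The goal of this request: be able to pass `existingBucket` as existingBucketObj,
// which requires KinesisFirehoseToS3Props.existingBucketObj to accept s3.IBucket.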

Cognito UserPoolDomain must be globally unique

Currently, cognito-helper sets up a Cognito::UserPoolDomain with the domain ID of the Elasticsearch domain (when used with aws-lambda-elasticsearch-kibana). While the Elasticsearch domain does not have to be globally unique, the Cognito domain does.

Use Case

Using the aws-lambda-elasticsearch-kibana pattern, one must ensure that both the ES domain and the Cognito UserPoolDomain are globally unique; otherwise, one must use addPropertyOverride on the UserPoolDomain to make it unique, such as by appending the AWS account ID.

Proposed Solution

Allow the user to provide Cognito parameters (similar to ES) on the Solutions Construct. Use the ES domain name with the account ID appended as a reasonable default.

Other

  • 👋 I may be able to implement this feature request
  • ⚠️ This feature might incur a breaking change

This is a 🚀 Feature Request

Split Service defaults from core and utility.

I propose moving the service defaults out of core and into a defaults package to enable "swapping in" more scoped-down and customized default sets for customer-specific implementations. This may also provide some organizational benefits as the library scales up the number of patterns.

Use Case

Thinking broadly across the customers I work with, I'd expect that overriding the core service defaults on a per-customer basis would be a likely use case (especially with enterprise customers, which typically go through a process of approving services for use within the organisation along with a set of specified default configurations to meet their security control position). Ideally 'aws-solutions-constructs' would evolve to support this model. Perhaps the service defaults could be moved out of 'core' and into a 'defaults' package? Then each customer would just need their own version of the defaults as appropriate. This would allow 'aws-solutions-constructs' to add more 'core' functionality and utility functions that could be picked up on the next pull without impacting any service defaults.

  • 👋 I may be able to implement this feature request
  • ⚠️ This feature might incur a breaking change

This is a 🚀 Feature Request

New Pattern: aws-lambda-sqs

Add your +1 👍 to help us prioritize

Overview:

This AWS Solutions Construct implements an AWS Lambda function connected to an Amazon SQS queue.

User provided props for the construct:

  • Either an existing instance of Lambda Function or the lambda.FunctionProps to deploy a new Lambda Function.
  • Optional sqs.QueueProps to override the default SQS queue props.
  • Optional flag to deploy the dead-letter queue.
  • Optional sqs.QueueProps to override the default dead-letter queue props.
  • Optional Maximum receives count (number of times that a message can be received before being sent to a dead-letter queue)

Default settings

An out-of-the-box implementation of the construct without any overrides will set the following defaults:

Amazon SQS Queue

  • Deploy SQS dead-letter queue for the source SQS Queue
  • Enable server-side encryption for source SQS Queue using AWS Managed KMS Key

AWS Lambda Function

  • Configure least privilege access IAM role for Lambda function
  • Enable reusing connections with Keep-Alive for NodeJs Lambda function

Inconsistent API Gateway Authentication Between Constructs

Adding the following property to the props object for ApiGatewayLambda and ApiGatewayDynamoDb causes the API Gateway to be launched with no authentication.

      apiGatewayProps: {
        defaultMethodOptions: {
          authorizationType: api.AuthorizationType.NONE
        }
      }

Adding the same property to the props object for ApiGatewaySqs is ignored and no message is provided to the user that they are not getting the behavior they requested.

Reproduction Steps

Line 189 of aws-apigateway-sqs is:

  authorizationType: api.AuthorizationType.IAM,

This is not found in the same addMethod() function in aws-apigateway-dynamodb.

It might be worth considering moving this function to core/apigatewayhelper.ts to remove the redundant implementations and ensure behavior stays consistent in the future.

Error Log

N/A

Environment

  • CDK CLI Version: 1.56.0
  • CDK Framework Version: 1.56.0
  • AWS Solutions Constructs Version: 1.56.0
  • OS: macOS
  • Language: TypeScript

Other


This is a 🐛 Bug Report

Documentation links hit wrong construct

Clicking on the construct Props interface link will take me to the wrong construct (always to aws-apigateway-dynamodb).

Reproduction Steps

  1. Go here: https://docs.aws.amazon.com/solutions/latest/constructs/aws-dynamodb-stream-lambda.html
  2. Scroll down to "Initializer"
  3. Click on bullet "props DynamoDBStreamToLambdaProps"

Result:
The link takes you to the props of the 'ApiGatewayToDynamoDB' construct instead of 'DynamoDBStreamToLambdaProps'.

Error Log

None.

Environment

  • CDK CLI Version: 1.46.0
  • CDK Framework Version: N/A
  • AWS Solutions Constructs Version: 1.46.0
  • OS: N/A
  • Language: N/A

Other

I'm guessing it's due to the ambiguous anchor (#pattern-construct-props). It works fine when a single construct is displayed, but when all constructs are consolidated into a single page the anchor becomes ambiguous and the first one is hit.


This is a 🐛 Bug Report

Update: [DynamoDB Patterns] Enable continuous backups and point-in-time recovery

Issue:

Amazon DynamoDB tables can use the Point-in-Time Recovery (PITR) feature to automatically take continuous backups of your DynamoDB data. Enable this feature for the following patterns:

  • aws-apigateway-dynamodb
  • aws-dynamodb-stream-lambda-elasticsearch-kibana
  • aws-dynamodb-stream-lambda
  • aws-iot-lambda-dynamodb
  • aws-lambda-dynamodb

Solution:

Update the DefaultTableProps in core to set pointInTimeRecovery to true by default.
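
A sketch of the change, assuming DefaultTableProps in core is a plain dynamodb.TableProps object (the partition key and billing mode shown are illustrative):

import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';

export const DefaultTableProps: dynamodb.TableProps = {
  partitionKey: { name: 'id', type: dynamodb.AttributeType.STRING }, // illustrative key
  billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
  // The change requested by this issue:
  pointInTimeRecovery: true,
};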

Best practices documentation:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/PointInTimeRecovery.html

Partial build instructions in CONTRIBUTING.md seem incomplete

After following the CONTRIBUTING for the full build, I went on to the partial build.

In the source/patterns/@aws-solutions-constructs/aws-dynamodb-stream-lambda directory I see this error when running npm run build+lint+test:

Error: Error: Unable to locate jsii assembly for "eslint". If this module is not jsii-enabled, it must also be declared under bundledDependencies.

Additionally, running npm run test fails from a missing jest dep.

Reproduction Steps

Create a clean repo and follow the steps from the CONTRIBUTING for the full build and then partial build.

Error Log

Environment

  • CDK CLI Version : 1.46.0
  • CDK Framework Version: 1.46.0
  • AWS Solutions Constructs Version : 1.46.0
  • OS : Linux
  • Language : Typescript

Other


This is a 🐛 Bug Report

New Pattern: aws-events-rule-sns

Build a new pattern to send notifications to an SNS topic when a cloudwatch event rule is triggered

Use Case

Need this pattern to send notifications to an SNS topic when a cloudwatch event rule is triggered

Proposed Solution

Build an aws-events-rule-sns pattern:

  • Create a CloudWatch Events Rule, an SNS Topic, and the relevant roles/permissions
  • Set the SNS topic as the target of the CloudWatch Events rule
  • Enable server-side KMS encryption for the SNS topic

Other

New Pattern: aws-events-rule-sqs

Overview:

This AWS Solutions Construct implements a CloudWatch Events rule which sends events to a new or existing SQS queue.

User provided props for the construct:

  • Either an existing instance of a CloudWatch Events rule or the rule props to deploy a new CloudWatch Events rule
  • Either an existing instance of sqs.Queue or sqs.QueueProps to deploy a new SQS queue
  • Optional enableEncryption to encrypt the SQS queue (default: true)
  • Optional encryptionKey user provided encryption key

Default Settings:

CloudWatch Events Rule

  • Create a CloudWatch Events rule to send events to the SQS queue

Amazon SQS Queue

  • Configure least privilege access permissions for the SQS Queue
  • Enable server-side encryption for the source SQS Queue using an AWS Managed KMS Key

aws-sqs-lambda & KMS

Docs state:

Enable server-side encryption for source SQS Queue using AWS managed KMS Key

But when I am looking at the code I don't see any KMS constructs.

I am somewhat new to this, and maybe missing something.

This is a 🐛 Bug Report

[Question] What is the aws-cdk dependency update timeline?

Hello,

I'm running a project on aws-cdk 1.56.0 and I'm unable to use some of the aws-solutions-constructs 1.54.0 packages (aws-cloudfront-s3). I was wondering what the aws-cdk dependency update timeline is, since they have weekly releases. What is the expected lag time for aws-solutions-constructs' aws-cdk dependency to be brought up to date?

Thanks!

Argument of type 'this' is not assignable to parameter of type 'Construct'.

Getting the following TS error:

Argument of type 'this' is not assignable to parameter of type 'Construct'.
  Type 'AppStack' is not assignable to type 'Construct'.
    Property 'onValidate' is protected but type 'Construct' is not a class derived from 'Construct'.ts(2345)

Reproduction Steps

    new SqsToLambda(this, 'CurrencyWorker', {
      existingLambdaObj: testFunction,
    })

Using latest CDK packages.

Maybe your packages are out of date?

Thanks.

(aws-cloudfront-s3): cloudFrontDistributionProps autocomplete is not populating

cloudFrontDistributionProps autocomplete is not populating


Reproduction Steps

import * as acs3 from '@aws-solutions-constructs/aws-cloudfront-s3';

new acs3.CloudFrontToS3(this, "test", {
  deployBucket: true,
  cloudFrontDistributionProps: {},
  bucketProps: {
    bucketName: "test"
  },
});

Environment

  • CDK CLI Version : 1.46.0
  • CDK Framework Version: 1.46.0
  • AWS Solutions Constructs Version : 1.46.0
  • OS : MacOS
  • Language : Typescript

Other


This is a 🐛 Bug Report

aws-cloudfront-apigateway Documentation Error

The sample code on the readme page says

const _api = defaults.RegionalApiGateway(stack, func);

which leads to this error:
▶ cdk synth
Cannot read property 'deployLambdaFunction' of undefined
Subprocess exited with error 1

I believe the correct example code should say:

  const [_api] = defaults.RegionalLambdaRestApi(this, func);

This agrees with the test code.

Environment

  • CDK CLI Version : 1.56.0
  • CDK Framework Version: 1.56.0
  • AWS Solutions Constructs Version : 1.56.0
  • OS : MacOS
  • Language : typescript

Other


This is a 🐛 Bug Report

Add "QueueWithDLQ" construct

I would like to contribute a "QueueWithDLQ" construct.

Use Case

Queue + DLQ is a very common practice and I think it is almost always desired.
This construct will reduce boilerplate code in most cases and standardise this common practice.

Proposed Solution

A (fairly simple) extension to the Queue construct

  • 👋 I may be able to implement this feature request
  • ⚠️ This feature might incur a breaking change

This is a 🚀 Feature Request

New Pattern: aws-lambda-sqs-lambda

Add your +1 👍 to help us prioritize

Overview:

This AWS Solutions Construct implements an AWS Lambda function connected to an Amazon SQS queue that triggers another AWS Lambda function.

User provided props for the construct:

  • Either existing instances of Lambda Functions or the lambda.FunctionProps to deploy new Lambda Functions.
  • Optional sqs.QueueProps to override the default SQS queue props.
  • Optional flag to deploy the dead-letter queue.
  • Optional sqs.QueueProps to override the default dead-letter queue props.
  • Optional Maximum receives count (number of times that a message can be received before being sent to a dead-letter queue)

Default settings

An out-of-the-box implementation of the construct without any overrides will set the following defaults:

Amazon SQS Queue

  • Deploy SQS dead-letter queue for the source SQS Queue
  • Enable server-side encryption for source SQS Queue using AWS Managed KMS Key

AWS Lambda Functions

  • Configure least privilege access IAM role for Lambda functions
  • Enable reusing connections with Keep-Alive for NodeJs Lambda functions

New Pattern: aws-lambda-sagemaker

Add your +1 👍 to help us prioritize

Overview:

This AWS Solutions Construct implements an AWS Lambda function connected to an Amazon SageMaker notebook instance.

User provided props for the construct:

  • Either an existing instance of a Lambda Function or the lambda.FunctionProps to deploy a new Lambda Function.
  • Optional deployInsideVpc to deploy the NotebookInstance inside VPC (default: true)
  • Optional subnetId if deployInsideVpc is true
  • Optional securityGroupIds if deployInsideVpc is true
  • Optional enableEncryption to encrypt the attached notebook instance storage volume(s) (default: true)
  • Optional encryptionKey user provided encryption key

Default settings

An out-of-the-box implementation of the construct without any overrides will set the following defaults:

Amazon SageMaker

  • Deploy SageMaker NotebookInstance inside VPC
  • Enable server-side encryption for SageMaker NotebookInstance using Customer Managed KMS Key

AWS Lambda Functions

  • Configure least privilege access IAM role for Lambda functions
  • Enable reusing connections with Keep-Alive for NodeJs Lambda functions

core/lib/CloudFrontDistributionForApiGateway function bug

export function CloudFrontDistributionForApiGateway(scope: cdk.Construct,
                                                    apiEndPoint: api.RestApi,
                                                    cloudFrontDistributionProps?: cloudfront.CloudFrontWebDistributionProps | any,
                                                    httpSecurityHeaders?: boolean): [cloudfront.CloudFrontWebDistribution, lambda.Version?, s3.Bucket?] {
  const _httpSecurityHeaders = httpSecurityHeaders ? httpSecurityHeaders : true;

The above source code has a bug: no matter what value is provided for httpSecurityHeaders, _httpSecurityHeaders will be true, because an explicit false is falsy and the ternary falls back to true.
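
A minimal sketch of one possible fix, assuming the intent is that the headers default to true unless the caller passes an explicit false:

// `false` is falsy, so the original ternary can never evaluate to false.
// Comparing against an explicit false preserves the default of true while
// honoring an opt-out:
const _httpSecurityHeaders = httpSecurityHeaders !== false;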


This is a 🐛 Bug Report

New Pattern: aws-events-rule-kinesisstream-gluejob

Amazon EventBridge acts as an integration bus within applications, across applications within an enterprise, and with AWS Marketplace vendors. Events coming in through EventBridge may need to be buffered or streamed and then processed through the application infrastructure. A Kinesis data stream can provide the buffering of event messages. The event messages may require transformations based on where they are coming from; Glue ETL jobs can then transform that data and store it in DynamoDB, Redshift, S3, or any other datastore for further processing.

Use Case

In my use case, I have streaming data that undergoes machine learning inference (text, image, and video, among others). The ingested data can come from different sources with different schema structures; storing the input data and the machine learning inference results in a normalized/standard structure requires the data to go through an ETL transformation.

Proposed Solution

I can provide more details on this one if required.

Other

  • 👋 I may be able to implement this feature request
  • ⚠️ This feature might incur a breaking change

New Pattern: aws-msk-lambda

Lambda recently added support for MSK (Managed Streaming for Apache Kafka) as an event source (https://aws.amazon.com/about-aws/whats-new/2020/08/aws-lambda-now-supports-amazon-managed-streaming-for-apache-kafka-as-an-event-source/)

Use Case

With this new integration, customers can build Apache Kafka consumer applications with Lambda functions without needing to worry about infrastructure management.

Other

There's an open issue where the new property for Lambda (Topics - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-lambda-eventsourcemapping.html#cfn-lambda-eventsourcemapping-topics) is not available in the CDK yet. Here's the GitHub issue I created on their repository: aws/aws-cdk#10138

  • 👋 I may be able to implement this feature request

This is a 🚀 Feature Request

Question: Is aws-cloudfront-s3 added bucket policy repetitive?

Hello :)

I have a question regarding aws-cloudfront-s3 construct. I've been experimenting with it, and I noticed some overlap in attached bucket policies here.

The s3:GetObject action has already been added through the aws-cloudfront module, so I'm not sure why it was added again here. Thanks!

Note: Current aws-cloudfront-s3 uses older L2 construct of aws-cloudfront module. See #39.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "HttpsOnly",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "*",
            "Resource": "arn:aws:s3:::static-content-1234567890/*",
            "Condition": {
                "Bool": {
                    "aws:SecureTransport": "false"
                }
            }
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1VKUUXTUNNAAA"
            },
            "Action": [
                "s3:GetObject*",
                "s3:GetBucket*",
                "s3:List*"
            ],
            "Resource": [
                "arn:aws:s3:::static-content-1234567890",
                "arn:aws:s3:::static-content-1234567890/*"
            ]
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1VKUUXTUNNAAA"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::static-content-1234567890/*"
        }
    ]
}

New Pattern: Custom Resource - run ECS task

A custom resource that runs an ECS task during the deployment.

Use Case

The use case I have right now is database schema migrations.

I'm running a Fargate Task to perform the migration.

Proposed Solution

Custom resource that triggers a Lambda function, which uses aws-sdk to run an ECS task.

Other

There is quite a bit of wiring and things to figure out and get right.

It would be worth having this centralized, and it could also provide learning material for other Custom Resource implementations.

I have some code for this; I am no AWS expert, but I can share whatever I've got.

  • 👋 I may be able to implement this feature request
  • ⚠️ This feature might incur a breaking change

This is a 🚀 Feature Request

New L2 construct in aws-cloudfront module

Hello,

Is there a timeline for when aws-cloudfront-s3, aws-cloudfront-apigateway, and aws-cloudfront-apigateway-lambda will start using the new aws-cloudfront L2 construct (Distribution)? Or perhaps there will be new experimental constructs using the new Distribution construct side by side with the current implementation.


Sqs lambda doesn't work for FIFO queue

The aws-sqs-lambda pattern fails if you want to use a FIFO SQS queue: the documentation says that the name of the queue needs to end with .fifo (https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html), but the dead-letter queue is created too and gets the same name when you use the queueProps parameter. https://github.com/awslabs/aws-solutions-constructs/blob/master/source/patterns/%40aws-solutions-constructs/aws-sqs-lambda/lib/index.ts#L51

One easy fix would be to add a dlqueueProps property. I created a PR: #13
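
A sketch of what the proposed fix could look like from the caller's side; dlqueueProps is the prop name suggested above and is not part of the published API:

import * as lambda from 'aws-cdk-lib/aws-lambda';
import { SqsToLambda } from '@aws-solutions-constructs/aws-sqs-lambda';

declare const handlerFunction: lambda.Function; // illustrative existing function

new SqsToLambda(this, 'SqsToLambda', {
  existingLambdaObj: handlerFunction,
  queueProps: { queueName: 'my-queue.fifo', fifo: true },
  // Proposed in this issue (not a published prop): give the DLQ its own .fifo name.
  dlqueueProps: { queueName: 'my-queue-dlq.fifo', fifo: true },
});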

Reproduction Steps

Apply:

    const sqsToLambda = new SqsToLambda(scope, 'SqsToLambda', {
      deployLambda: false,
      existingLambdaObj: this.executerLambda,
      queueProps: {
        queueName: `${scope.stackName}.fifo`,
        fifo: true,
      },
    });

Error Log

9/61 | 1:58:45 PM | UPDATE_FAILED | AWS::SQS::Queue | SqsToLambda/queue (SqsToLambdaqueueE6C100FE) AlfInstancesStackEuWest1Dev.fifo already exists in stack arn:aws:cloudformation:eu-west-1:981237193288:stack/AlfInstancesStackEuWest1Dev/3e9fb030-baae-11ea-8a64-0abd335268a4
new Queue (/home/travis/build/mmuller88/alf-cdk/node_modules/@aws-solutions-constructs/core/node_modules/@aws-cdk/aws-sqs/lib/queue.js:48:23)
_ Object.buildQueue (/home/travis/build/mmuller88/alf-cdk/node_modules/@aws-solutions-constructs/core/lib/sqs-helper.js:38:12)
_ new SqsToLambda (/home/travis/build/mmuller88/alf-cdk/node_modules/@aws-solutions-constructs/aws-sqs-lambda/lib/index.js:51:34)
_ new AlfCdkLambdas (/home/travis/build/mmuller88/alf-cdk/lib/AlfCdkLambdas.js:141:29)
_ new AlfInstancesStack (/home/travis/build/mmuller88/alf-cdk/index.js:13:25)
_ Object. (/home/travis/build/mmuller88/alf-cdk/index.js:89:1)
_ Module._compile (internal/modules/cjs/loader.js:1138:30)
_ Object.Module._extensions..js (internal/modules/cjs/loader.js:1158:10)
_ Module.load (internal/modules/cjs/loader.js:986:32)
_ Function.Module._load (internal/modules/cjs/loader.js:879:14)
_ Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:71:12)
_ internal/main/run_main_module.js:17:47

Environment

  • CDK CLI Version: 1.47
  • AWS Solutions Constructs Version: 1.47

Other

My current workaround is to disable the deadletter queue with

deployDeadLetterQueue: false


This is a 🐛 Bug Report

New Pattern: aws-iot-lambda-ssm-parameter

For IoT devices to read from the SSM Parameter Store

Use Case

Proposed Solution

Other

  • 👋 I may be able to implement this feature request
  • ⚠️ This feature might incur a breaking change

This is a 🚀 Feature Request

New Pattern: aws-events-rule-kinesisstream

Build a new pattern to send data to kinesis stream when a cloudwatch event rule is triggered

Use Case

Need this pattern to send data to kinesis stream when a cloudwatch event rule is triggered

Proposed Solution

Build an aws-events-rule-kinesisstream pattern:

  • Create a CloudWatch Events Rule, a Kinesis data stream, and the relevant roles/permissions
  • Set the Kinesis data stream as the target of the CloudWatch Events rule
  • Enable server-side KMS encryption for the Kinesis data stream

Other

I am implementing this feature request and will submit a PR when it is done.

Ability for user to enable the AWS WAF web ACL

Provide the user an option to enable the AWS WAF web ACL on the following patterns:

  • All apigateway related patterns e.g. aws-apigateway-lambda
  • All cloudfront related patterns e.g. aws-cloudfront-apigateway-lambda

Use Case

AWS WAF provides an additional layer of protection for your web application. This feature will make it easy for users to enable an AWS WAF web ACL for API Gateway and CloudFront based patterns. These patterns will have an optional input parameter for the user to provide the WAF web ACL that will be associated with the pattern-created API Gateway or CloudFront endpoint.

Proposed Solution

Add a new optional input parameter (construct props) to accept a user-provided WAF web ACL for the following patterns (see the sketch after the list):

  • aws-apigateway-lambda
  • aws-apigateway-dynamodb
  • aws-apigateway-sqs
  • aws-cloudfront-apigateway
  • aws-cloudfront-apigateway-lambda
  • aws-cloudfront-s3
  • aws-cognito-apigateway-lambda
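
Independent of the construct props, associating a user-provided web ACL with a REST API stage can be sketched with the L1 wafv2 resource as below (variable names are illustrative; for CloudFront distributions the web ACL is instead attached via the distribution's web ACL id):

import * as apigateway from 'aws-cdk-lib/aws-apigateway';
import * as wafv2 from 'aws-cdk-lib/aws-wafv2';

declare const api: apigateway.RestApi;  // e.g. the apiGateway exposed by a pattern
declare const webAclArn: string;        // user-provided, REGIONAL-scope web ACL ARN

new wafv2.CfnWebACLAssociation(this, 'ApiWafAssociation', {
  resourceArn: api.deploymentStage.stageArn,
  webAclArn,
});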

Other

  • 👋 I may be able to implement this feature request
  • ⚠️ This feature might incur a breaking change

This is a 🚀 Feature Request

BYO DynamoDB table to aws-apigateway-dynamodb

Similar to how an existingLambdaObj? can be supplied for select patterns to bring-your-own Lambda function, I'm seeking the ability to bring-your-own DynamoDB table to this pattern.

Use Case

This feature would be useful if the user already has a DynamoDB table in their account that they would like to retroactively wrap an API around using the pattern, OR if they are using another Solutions Constructs pattern that deploys a table and they would like to hook an API onto that table.

Proposed Solution

Add an existingTableObj? pattern prop, similar to aws-dynamodb-stream-lambda or aws-lambda-dynamodb.

Other

N/A

  • 👋 I may be able to implement this feature request
  • ⚠️ This feature might incur a breaking change

This is a 🚀 Feature Request

Logging buckets should not get versioning by default. Versioned buckets should have a reasonable default lifecycle policy for old versions.

DefaultS3Props (Solutions Constructs core) sets versioning on for logging buckets, but does not set a lifecycle policy. Versioning on logging buckets isn't useful, as objects are written once. Versioning without a minimal lifecycle policy runs the risk of increasing storage (and cost) without bounds.

Reproduction Steps

const loggingBucket = new Bucket(this, "S3LoggingBucket", loggingBucketConfig)

Produces (uninteresting bits omitted):

S3LoggingBucket:
    Type: AWS::S3::Bucket
    Properties:
      AccessControl: LogDeliveryWrite
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256
      LoggingConfiguration:
        LogFilePrefix: access-logs
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      VersioningConfiguration:
        Status: Enabled

Error Log

No errors. But versioning on logging buckets isn't useful, as objects are written once, and versioning without a minimal lifecycle policy runs the risk of increasing storage (and cost) without bounds.
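
If versioning is kept on a bucket, a sketch of a reasonable default lifecycle rule for old versions might look like this (CDK v2 module paths shown; the 90-day window is purely illustrative):

import * as cdk from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';

new s3.Bucket(this, 'S3LoggingBucket', {
  versioned: true,
  lifecycleRules: [{
    // Expire noncurrent object versions so a versioned logging bucket
    // does not grow (and cost) without bounds.
    noncurrentVersionExpiration: cdk.Duration.days(90),
  }],
});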

Environment

  • CDK CLI Version : 1.56.0
  • CDK Framework Version: 1.56.0
  • AWS Solutions Constructs Version : 1.56.0
  • OS : OS/X
  • Language : Typescript

Other


This is a 🐛 Bug Report

New Pattern: aws-apigateway-kinesisstream

This new pattern is similar to the existing aws-apigateway-sqs, but the service integration is with Kinesis Data Streams.

Use Case

API Gateway provides a layer of abstraction from the streaming storage (Kinesis Data Streams). This layer of abstraction enables custom authentication approaches and control of quotas for specific data producers.

Proposed Solution

The new pattern will use this documentation page as a reference: https://docs.aws.amazon.com/apigateway/latest/developerguide/integrating-api-with-aws-services-kinesis.html

Other

  • 👋 I may be able to implement this feature request

This is a 🚀 Feature Request

HTTPS Support for docs.awssolutionsbuilder.com

Please support HTTPS / TLS on the documentation site: https://docs.awssolutionsbuilder.com

Use Case

HTTPS all the things, please :)

https://d1.awsstatic.com/whitepapers/Security/AWS_Security_Best_Practices.pdf#page=39

Proposed Solution

The site appears to be a public website-enabled S3 bucket:
https://s3.us-east-1.amazonaws.com/docs.awssolutionsbuilder.com/aws-solutions-konstruk/latest/index.html

~ $ dig +noall +answer docs.awssolutionsbuilder.com.
docs.awssolutionsbuilder.com. 4	IN	A	52.216.78.43
~ $ dig +noall +answer -x 52.216.25.251
251.25.216.52.in-addr.arpa. 894	IN	PTR	s3-website-us-east-1.amazonaws.com.

Perhaps a CloudFront Distribution + ACM certificate?

Maybe dogfood the @aws-solutions-konstruk/aws-cloudfront-s3 package? That has the benefit of also apparently using some Security related HTTP Headers with a Lambda@Edge.

Other

The documentation site gets an F from Mozilla Observatory :( https://observatory.mozilla.org/analyze/docs.awssolutionsbuilder.com

  • 👋 I may be able to implement this feature request (if the docs site is open source?)
  • ⚠️ This feature might incur a breaking change

This is a 🚀 Feature Request

New Pattern: aws-lambda-cr-apigateway

Add your +1 👍 to help us prioritize

Overview:

This AWS Solutions Construct implements an inline AWS Lambda function and a custom resource to invoke an API endpoint and post user-provided metrics data.

User provided props for the construct:

  • URL for API endpoint
  • Metrics data as ResourceProperties JSON blob
  • Optional flag to append UUID as unique identifier for every request (default: true)
  • Optional lambda.Function, an existing instance of a Lambda Function (default: None)
  • Optional lambda.FunctionProps to deploy a new Lambda Function (default: default props are used)

Default settings

An out-of-the-box implementation of the construct without any overrides will set the following defaults:

AWS Lambda Function

  • Configure least privilege access IAM role for Lambda function
  • Enable reusing connections with Keep-Alive for NodeJs Lambda function

AWS CloudFormation CustomResources

  • Generate the unique identifier (UUID) per request
  • Post the user provided ResourceProperties (metrics data) + UUID to the API endpoint

New Pattern: Consolidated Logging Bucket

This AWS Solutions Construct creates an encrypted S3 bucket for access logging if one does not already exist. The bucket is automatically configured so that public access is blocked, encryption is enabled, and the log delivery service has write access.

User provided props for the construct

  • Bucket name for the logging bucket
  • Bucket for which access logging is to be enabled
  • KMS key or SSE encryption

Use Case

When deploying an S3 bucket it is a best practice to log access data. This can result in S3 bucket sprawl, as each application creates another logging bucket for the S3 buckets used in the app. This construct allows use/reuse of a single log bucket for multiple S3 buckets' access logs by creating it when needed and reusing it when it already exists. Best practices for securing the bucket help ensure the security and confidentiality of the data by restricting public access and using encryption. The name of the bucket whose access is being logged is used as the prefix in the log bucket.

Proposed Solution

  • Create the S3 log bucket if it doesn't exist
  • Apply the necessary ACLs to prevent public access
  • Apply the necessary grants to allow log service write access
  • Configure access logging for the bucket to be logged to a prefix in the logging bucket

Ex. app_bucket needs access logging. log_bucket is created. Access to app_bucket is logged to s3://log_bucket/app_bucket
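
Outside of the proposed construct, the wiring it would automate looks roughly like this in plain CDK (bucket ids and the prefix are illustrative):

import * as s3 from 'aws-cdk-lib/aws-s3';

// The shared, locked-down logging bucket (created once, reused by many apps).
const logBucket = new s3.Bucket(this, 'LogBucket', {
  encryption: s3.BucketEncryption.S3_MANAGED,
  blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
});

// Each application bucket logs to a prefix named after itself.
new s3.Bucket(this, 'AppBucket', {
  serverAccessLogsBucket: logBucket,
  serverAccessLogsPrefix: 'app_bucket/',
});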

Other

  • 👋 I may be able to implement this feature request
  • ⚠️ This feature might incur a breaking change

This is a 🚀 Feature Request

New pattern: aws-iot-lambda-secretsmanager

For IoT devices to read from AWS Secrets Manager

Use Case

Proposed Solution

Other

  • 👋 I may be able to implement this feature request
  • ⚠️ This feature might incur a breaking change

This is a 🚀 Feature Request

Upgrade deprecated CDK property used by API Gateway patterns

I was using the aws-cloudfront-apigateway-lambda pattern and found that the default authorization type for API Gateway is IAM. I needed to change it to NONE, but I saw that the pattern is using the deprecated CDK property options.
https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-apigateway.LambdaRestApi.html

Use Case

The options property is still available, but since it is deprecated, it would be better to replace options with the current, non-deprecated properties.

Proposed Solution

Since all of the options properties are available directly on LambdaRestApiProps, it would be easy to use LambdaRestApiProps itself instead of options.
https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-apigateway.LambdaRestApiProps.html

Other

These are the places in the source code where options is used (a sketch of the proposed change follows the snippet):

export function DefaultGlobalLambdaRestApiProps(_existingLambdaObj: lambda.Function, _logGroup: LogGroup) {
  const defaultGatewayProps: api.LambdaRestApiProps = {
    handler: _existingLambdaObj,
    options: DefaultRestApiProps([api.EndpointType.EDGE], _logGroup)
  };
  return defaultGatewayProps;
}

export function DefaultRegionalLambdaRestApiProps(_existingLambdaObj: lambda.Function, _logGroup: LogGroup) {
  const defaultGatewayProps: api.LambdaRestApiProps = {
    handler: _existingLambdaObj,
    options: DefaultRestApiProps([api.EndpointType.REGIONAL], _logGroup)
  };
  return defaultGatewayProps;
}
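
A sketch of the proposed change for one of these helpers, assuming DefaultRestApiProps returns the shared RestApi settings: spread the defaults directly into LambdaRestApiProps instead of nesting them under the deprecated options property.

export function DefaultRegionalLambdaRestApiProps(_existingLambdaObj: lambda.Function, _logGroup: LogGroup): api.LambdaRestApiProps {
  return {
    handler: _existingLambdaObj,
    // Spread the shared defaults instead of nesting them under `options`.
    ...DefaultRestApiProps([api.EndpointType.REGIONAL], _logGroup),
  };
}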

  • 👋 I may be able to implement this feature request
  • ⚠️ This feature might incur a breaking change

This is a 🚀 Feature Request

DynamoDB constructs don't accept a user-provided table.

When working with the LambdaToDynamoDB construct, I have the option of providing a Lambda function; this is how I can integrate it with an ApiGatewayToLambda construct. However, constructs like LambdaToDynamoDB and DynamoDBStreamToLambda don't allow me to provide an existing Table, which means I can't use them together. I can wire:

ApiGatewayToLambda->LambdaToDynamoDB

but can't wire

LambdaToDynamoDB -> DynamoDBStreamToLambda

or

LambdaToDynamoDB -> DynamoDBStreamToLambdaToElasticSearchAndKibana

as these constructs insist on creating a new table.

Use Case

I'd like to be able to chain together a number of constructs from this library. Right now I can't seem to do that.

Proposed Solution

Continue the pattern seen with the Lambda function and the LambdaToDynamoDB construct, something like the interface below (a chaining sketch follows):

export interface DynamoDBStreamToLambdaProps {
  readonly deployTable: boolean;
  readonly existingTableObj?: dynamodb.Table;
}
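
Under that proposal, chaining two constructs might look like the sketch below (existingTableObj is the proposed prop; dynamoTable is assumed to be the table exposed by LambdaToDynamoDB):

import * as lambda from 'aws-cdk-lib/aws-lambda';
import { LambdaToDynamoDB } from '@aws-solutions-constructs/aws-lambda-dynamodb';
import { DynamoDBStreamToLambda } from '@aws-solutions-constructs/aws-dynamodb-stream-lambda';

declare const writeFunction: lambda.Function; // illustrative existing functions
declare const readFunction: lambda.Function;

const writer = new LambdaToDynamoDB(this, 'Writer', {
  existingLambdaObj: writeFunction,
});

// Reuse the table created above instead of letting the second construct create its own.
new DynamoDBStreamToLambda(this, 'Reader', {
  existingLambdaObj: readFunction,
  existingTableObj: writer.dynamoTable, // proposed prop from this feature request
});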

Other

I'm curious why the boolean needs to be provided? Could a truthy check on 'existingXObject' be enough?

  • 👋 I may be able to implement this feature request
  • ⚠️ This feature might incur a breaking change

This is a 🚀 Feature Request

BYO Kinesis Stream to aws-kinesisstreams-lambda

Similar to how an existingLambdaObj? can be supplied for select patterns to bring-your-own Lambda function, I'm seeking the ability to bring-your-own Kinesis Stream to this pattern.

Use Case

This feature would be useful if the user already has a Kinesis data stream in their account that they would like to retroactively connect to a Lambda function.

Proposed Solution

Add an existingStreamObj? pattern prop, similar to aws-apigateway-kinesisstreams.

Question: Would it also make sense to update the eventSourceProps property to be of type KinesisEventSourceProps (instead of EventSourceMappingOptions)? The other patterns for Lambda (such as aws-dynamodb-stream-lambda and aws-s3-lambda) are already using the types from aws-lambda-event-sources.

Other

  • 👋 I may be able to implement this feature request
  • ⚠️ This feature might incur a breaking change

This is a 🚀 Feature Request
