
macpro-quickstart-serverless's Issues

Serverless no output timeout should be increased

Elasticsearch can occasionally take more than 30 minutes to be created. The cause of this isn't known and isn't being pursued.

The 'no output' timeout for serverless is set to 30 minutes right now. Bumping it to 60 minutes would cover all of the long build time scenarios seen so far.
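
For reference, this is a one-line change on the long-running step in the CircleCI config; the step name and command below are placeholders, only no_output_timeout is the point:

  - run:
      name: deploy
      command: ./deploy.sh
      no_output_timeout: 60m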

AC:

  • The no output timeout has been increased from 30 minutes to 60 minutes

Document ENV variables for build

There are several variables needed to deploy to AWS. Some are optional, some are not.

This should all be documented in the README and the config.yml

AC:

  • The environment variables used during deployment are documented in the README and the .circleci/config.yml

Lock script race condition

The lock script occasionally fails to work as intended.
Here is what is believed to be happening:

  • the lock script asks, "are there any builds with lower build numbers currently running on this branch?"
  • the lock script waits until all lower build numbers (earlier builds) are no longer running
  • when an earlier build jumps from the 'deploy' stage to the 'test' stage, there is a moment where that build has no running jobs.
  • if the lock script polls the api for running jobs with lower build numbers in this window, the locked build will unlock and deploy/test.

The impact is minimal at the moment, since the effect is running tests against an environment that has just begun deploying. The tests would finish before deployment really starts, and the tests are just smoke tests anyway.

A fix should be introduced, though. Maybe the 'check the API for no earlier jobs' poll has to pass twice in a row (sketched below)? This might be difficult to test.
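
A rough sketch of the double-confirmation idea as a CircleCI step; the helper script that counts earlier running builds via the CircleCI API is hypothetical:

  - run:
      name: Wait for earlier builds on this branch
      command: |
        clear_polls=0
        # require two consecutive clear polls, so a momentary gap between an
        # earlier build's jobs doesn't release the lock
        while [ "$clear_polls" -lt 2 ]; do
          running=$(./scripts/count_earlier_running_builds.sh)  # hypothetical helper hitting the CircleCI API
          if [ "$running" -eq 0 ]; then
            clear_polls=$((clear_polls + 1))
          else
            clear_polls=0
          fi
          sleep 10
        done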

AC:

  • The bad behavior of the lock script is mitigated.

GitHub Actions deployment uses set-env, which is deprecated

Eventually that's gonna bite... so at least we'll capture it here. While I was PR-ing the branch to move to Actions, I poked around to see if it was easy enough to "just update"... when less is at stake, I will update and test, though I am happy to let someone else do this, too :)

The set-env command is deprecated and will be disabled soon:
A moderate security vulnerability has been identified in the GitHub Actions runner that can allow environment variable and path injection in workflows that log untrusted data to stdout. This can result in environment variables being introduced or modified without the intention of the workflow author. To address this issue we have introduced a new set of files to manage environment and path updates in workflows.

Here's what I found as a solution:
Setting an environment variable
echo "{name}={value}" >> $GITHUB_ENV

Creates or updates an environment variable for any actions running next in a job. The action that creates or updates the environment variable does not have access to the new value, but all subsequent actions in a job will have access. Environment variables are case-sensitive and you can include punctuation.

Example
echo "action_state=yellow" >> $GITHUB_ENV
Running $action_state in a future step will now return yellow

Multiline strings
For multiline strings, you may use a delimiter with the following syntax.

{name}<<{delimiter}
{value}
{delimiter}
Example
In this example, we use EOF as a delimiter and set the JSON_RESPONSE environment variable to the value of the curl response.

steps:
  - name: Set the value
    id: step_one
    run: |
      echo 'JSON_RESPONSE<<EOF' >> $GITHUB_ENV
      curl https://httpbin.org/json >> $GITHUB_ENV
      echo 'EOF' >> $GITHUB_ENV

Adding a system path
echo "{path}" >> $GITHUB_PATH

Prepends a directory to the system PATH variable for all subsequent actions in the current job. The currently running action cannot access the new path variable.

Example
echo "/path/to/dir" >> $GITHUB_PATH

uploads-scan service doesn't work with Greenfield

The roles created by the uploads-scan service don't allow for iam_path or iam_permissions_boundary to be set.

These are typically not needed, but in Greenfield accounts, we need to specify those. This is a pattern we have in other services.

Note: This quickstart isn't deploying in a Greenfield account, so this issue understandably went unnoticed.
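
A minimal sketch of the fix on one of the service's roles, assuming optional IAM_PATH and IAM_PERMISSIONS_BOUNDARY environment variables; when a variable is unset, the corresponding property should be dropped entirely (e.g. via a CloudFormation Condition), and the role name below is hypothetical:

Resources:
  ScanLambdaRole:                 # hypothetical role name
    Type: AWS::IAM::Role
    Properties:
      Path: ${env:IAM_PATH, "/"}                             # defaults to "/" when unset
      PermissionsBoundary: ${env:IAM_PERMISSIONS_BOUNDARY}   # omit this property when unset
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole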

AC:

  • IAM permissions boundary and path are able to be set for the uploads-scan service

User uploaded objects are scanned for PII

Adding PII scanning to the uploaded objects has been mentioned and would be a great addition to the quickstart.
This is targeted as a use case for AWS Macie, but this issue won't prescribe the technology.
It also won't prescribe the exact actions to take upon finding potential PII.
What's important for this quickstart is that the plumbing is there, and there's an existing framework to scan for PII, trigger an event on findings, and take action. The details of those actions aren't super important. Maybe send an email, tag the object, or delete it.
Do what you think is best. Keep it flexible.

AC:

  • user uploads are scanned for Personally Identifiable Information (PII) in near real time (read: event driven)
  • findings of the PII scan trigger an event of some kind where action is taken (email the submitter, delete the object, etc.)

Allow for global IAM path parameter

This stems from a CMS specific requirement.
In CMS Greenfield accounts, admins can only create IAM objects under a "/some/iam/path" IAM path.
The current codebase can't support this, as the default "/" path is implied everywhere.
The quickstart should be updated to optionally take a global iam path; that's the idea.
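
One possible shape, assuming a single custom variable with a default that every service's IAM resources then reference:

custom:
  iamPath: ${env:IAM_PATH, "/"}   # optional global IAM path, defaulting to "/"

# ...then, on each IAM object:
#   Path: ${self:custom.iamPath}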

AC:

  • A quickstart user can specify an IAM path in which all IAM objects for the project will be created.

Configurable AWS WAF to restrict access

A WAF can be put in front of cloudfront to restrict access by ip range. This can be used to essentially allow only vpn traffic to reach the site. It would be nice if the quickstart had this pattern laid out, ideally as a configurable option. Restricting access to the site to a vpn is a common ask.
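
A hedged sketch of the pattern using WAFv2; resource names and the CIDR are placeholders, and a CLOUDFRONT-scoped WebACL must be created in us-east-1:

Resources:
  VpnOnlyIpSet:
    Type: AWS::WAFv2::IPSet
    Properties:
      Name: vpn-only
      Scope: CLOUDFRONT
      IPAddressVersion: IPV4
      Addresses:
        - 203.0.113.0/24          # placeholder: the VPN's egress CIDR(s)
  WebAcl:
    Type: AWS::WAFv2::WebACL
    Properties:
      Name: restrict-to-vpn
      Scope: CLOUDFRONT
      DefaultAction:
        Block: {}                 # deny anything not explicitly allowed
      Rules:
        - Name: allow-vpn
          Priority: 0
          Action:
            Allow: {}
          Statement:
            IPSetReferenceStatement:
              Arn:
                Fn::GetAtt: [VpnOnlyIpSet, Arn]
          VisibilityConfig:
            SampledRequestsEnabled: true
            CloudWatchMetricsEnabled: true
            MetricName: allowVpn
      VisibilityConfig:
        SampledRequestsEnabled: true
        CloudWatchMetricsEnabled: true
        MetricName: restrictToVpn

The WebACL's ARN would then be referenced from the CloudFront distribution's WebACLId property. Making the whole block conditional on a config flag would give the "optional" behavior described above.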

AC:

  • The quickstart includes the optional creation and configuration of WAF in front of cloudfront.

Introduce a single serverless deployment bucket

By default, serverless creates a bucket for its use for each service. This results in a lot of buckets.

It seems preferable to create a single bucket for all services to use.
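
A minimal sketch of the serverless.yml change, assuming the shared bucket already exists or is created by something like the serverless-deployment-bucket plugin; the bucket name is a placeholder:

provider:
  deploymentBucket:
    name: my-project-serverless-deploys   # one shared bucket, reused by every service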

AC:

  • A single bucket for serverless deployments is shared by all services in a repo.

Add profile page with account update ability

A profile or settings page should be added. A user should be able to update or add any mutable user attributes, like name and phone number.

AC:

  • Logged in users can go to a profile/settings page to view/update their user information.

Implement caching for node modules

CircleCI supports caching of dependencies. This should be used for node modules in the name of speed. The .serverless directories should be investigated for caching too, but that'll be another issue.
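
Roughly what this looks like in the CircleCI config, keyed off the lockfile checksum so the cache invalidates when dependencies change:

steps:
  - restore_cache:
      keys:
        - node-modules-{{ checksum "package-lock.json" }}
  - run: npm install   # effectively a no-op when the cache was restored
  - save_cache:
      key: node-modules-{{ checksum "package-lock.json" }}
      paths:
        - node_modules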

AC:

  • The node_modules installed by the deployment are cached and reused as much as possible.

Add ES index and mapping creation

Creating ES indices and configuring mappings is understood to be a common workflow for systems using ES.
Adding a pattern for creating an index and configuring mappings would be a solid addition to the quickstart.
It's not totally clear where this config should ideally live.
At the moment, a separate es config service that does configuration via lambda makes sense.
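
If the dedicated service route is taken, the wiring details are open. As a placeholder, the simplest possible shape is a deploy hook that PUTs an index with mappings; the endpoint variable, index name, and mapping here are all hypothetical, and this assumes the caller is authorized against the domain:

custom:
  scripts:
    hooks:
      'deploy:finalize': |
        curl -s -XPUT "https://${self:custom.esEndpoint}/documents" \
          -H 'Content-Type: application/json' \
          -d '{"mappings":{"properties":{"createdAt":{"type":"date"}}}}'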

AC:

  • The quickstart creates an index and configures mappings out of the box, suggesting a pattern for extension and development.

So Many Buckets!?

Seven buckets per deployment kind of seems like a lot?
The following might be useful --

If you have defined multiple buckets, you can limit your deployment to a single bucket with the --bucket option:
$ sls s3deploy --bucket my-bucket

The ultimate goal is not running into that account bucket limit... it will help a lot to get moving to a CI where the destroy is automatically happening, but even with decent cleanup, it only takes 14 branches to blow the limit... I count 6 people already making branches, with at least 3 more onboarding this quarter...

Let's please optimize the bucket usage and cleanup :)

'Profile' title in navbar does not display on login

Type of Issue:

  • Bug: Something isn't working

Issue Creator Checklist

  • This issue has been thoroughly documented below; a developer should be able to understand the issue by reading it.

What's broken?

Upon first logging into the site, the Profile/logout drop down does not display properly.
The clickable navitem is intended to display the logged in user's email. But, when logging in, the element is rendered before the user is authenticated. So, it appears blank.

Screenshot of home page after first logging in (note the top right drop down menu with no label, just an arrow): [screenshot]

Screenshot of home page after refreshing: [screenshot]

Two options:

  • Set the navitem's title (email) after authenticating
  • Display a static 'My Profile' or something similar to make things less complicated.

What's the impact of this bug?

When first logging into the site, users are shown a blank/confusing drop down for their profile and settings. While there is still a dropdown arrow, it's not totally obvious that there is a menu available or what it does.

This is considered a low impact bug, as the menu label only behaves incorrectly once, upon first logging in.

Steps to Reproduce?

Log in to the app. Use Incognito/Private mode if there's a possibility you have a live session.
The top right menu will be blank.
Refresh the page and your email will appear.

Assorted Notes/Considerations

Screenshots above.

AC:

  • The profile/logout dropdown's navitem title behaves as intended, however that intention is defined.

Add WAF logging

The WAF logs for Cloudfront and APIGW should be piped to S3 for storage and analysis. This should allow for insights into what requests are getting denied, what IPs are hammering the site... take your pick.
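
For WAFv2, logging has to target a Kinesis Firehose delivery stream (named with an aws-waf-logs- prefix) that writes to S3. A minimal sketch, assuming the WebACL and the delivery stream are defined elsewhere:

Resources:
  WafLoggingConfig:
    Type: AWS::WAFv2::LoggingConfiguration
    Properties:
      ResourceArn:
        Fn::GetAtt: [WebAcl, Arn]                   # the WebACL being logged (assumed defined elsewhere)
      LogDestinationConfigs:
        - Fn::GetAtt: [WafLogDeliveryStream, Arn]   # Firehose stream delivering to the S3 log bucket (assumed)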

AC:

  • The WAF logs from Cloudfront and APIGW are collected and stored in S3.

Rename the deploy workflow

This is simple but an issue is good for tracking purposes.

The deploy workflow for github actions is currently named config.yml
That should be renamed to deploy.yml
Likewise, the workflow itself should be renamed from Build to Deploy

This could temporarily break the github lock script if the right order of events happened, so I want to include this as part of the other breaking changes in release 1.x

AC:

  • The GitHub Actions workflow and file are renamed as appropriate.

CloudTamer (CTKEY) support should be removed

The quickstart currently supports specifying AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY or specifying CTKEY_USER/CTKEY_PASSWORD.

CTKEY is a tool that lets you trade a fixed CloudTamer username/password for temporary Amazon credentials.

The authors' use case for using CloudTamer programmatically (CTKEY) is going away, and the support for it should too.
It complicates masking sensitive variables, and the workflow was never broadly applicable.

AC:

  • CloudTamer CTKEY support is removed.

Add pull request builds in circle ci

pull request builds should be required on every pull request.
Right now, if you push a branch whose name starts with cci-, it will be built. If you make a pull request for one of these branches, it will be required to pass its status check (cci build).
But branches not matching cci- have no build.
This was seen as a good thing, since we can get code in quickly without a build (like README updates). Also, the build process for a new env takes significant time (30 minutes or so) because of Elasticsearch.
However, we could have a designated PR environment... each PR kicks off a deployment of that environment and locks it, so only one PR builds at a time. So, a persistent PR environment. This takes care of the speed issue for new environments... the env isn't new. There will be times, when changing a good deal of infrastructure, that this becomes problematic and slow. I think we take those as they come. Most changes will be to the UI code or the fast deploying lambdas.

AC:

  • Every pull request triggers a CircleCI build and is required to pass before merging.

Uppercase stage names break cognito with custom domain name

This isn't a problem yet, but it has been seen when implementing Okta, which uses a Cognito custom domain. It can and should be fixed ahead of time. Found by @sethsacher.

An error occurred: UserPoolDomain - The domain name contains an invalid character. Domain names can only contain lower-case letters, numbers, and hyphens. Please enter a different name that follows this format: ^[a-z0-9](?:[a-z0-9\-]{0,61}[a-z0-9])?$ (Service: AWSCognitoIdentityProviderService; Status Code: 400; Error Code: InvalidParameterException; Request ID: 013e0b42-cac6-4420-ab34-27720db86024; Proxy: null).

I recommend adding to the branch verification in GitHub Actions and CircleCI to fail if there's an uppercase character in the branch name.
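
A small sketch of what that check could look like as a GitHub Actions step; the CircleCI version would be analogous:

  - name: Verify branch name is lowercase
    run: |
      if echo "${GITHUB_REF}" | grep -q '[A-Z]'; then
        echo "Branch names must be all lowercase; Cognito custom domains reject uppercase."
        exit 1
      fi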

AC:

  • The error above is avoided.

Add Virus/Malware scanning to user uploaded S3 objects

User uploaded S3 objects should be scanned for viruses/malware/whatever.
This should probably be driven by an S3 event triggering a lambda based scan. Tool TBD; ClamAV is the only tool that's been mentioned by name.
Perhaps a user should only be allowed to download a file if it's been scanned.
Perhaps the file should be removed if it's found to be a risk.
Perhaps the admin team and/or the user should be notified if it was deemed a risk.
There's wiggle room here to pick what's best.
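
Whatever tool is chosen, the event wiring would look roughly like this in serverless.yml; the handler path, bucket variable, and function name are hypothetical:

functions:
  avScan:
    handler: handlers/avScan.handler   # hypothetical: runs the chosen scanner against the new object
    events:
      - s3:
          bucket: ${self:custom.attachmentsBucketName}   # hypothetical bucket reference
          event: s3:ObjectCreated:*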

AC:

  • User uploaded files are scanned for malware/virus/risk in near real time (read: event driven).
  • The findings of the scan trigger events/logic to act upon them... emailing the uploader, removing the object, tagging as safe, etc.

Add support for multiple contact emails

Right now the user's email is attached to the form, and that address is notified on create/update/delete.
There should be a space to add additional emails that should be notified.

AC:

  • Additional emails that should be notified on events can be added to the form.

Replace dynamo-to-elasticsearch-nodejs dependency

The elasticsearchIndexer function currently uses dynamo-to-elasticsearch-nodejs. It's a publicly available package specifically made for just that... flowing data from a dynamodb stream to elasticsearch.

This should be replaced for two reasons:

  • Making a signed http request to send data to ES would remove the need for this dependency. That would reduce the size of the deployed function and reduce dependencies.
  • The package has a GPL-3.0-only license, flagged by Fossa as a potential issue. It'd be wise to swerve this altogether since the package isn't necessary.

AC:

  • The project no longer depends on dynamo-to-elasticsearch-nodejs

Make it fast

Make the deployment fast. Caching, package.json consolidation, conditional npm installs, npm ci.
Probably skip the "only deploy what's changed since the last success"... serverless only updates stacks if it needs to, so if the total deployment is fast enough, we won't need that logic (I don't want that logic either; it's a mess).

AC:

  • Deployment is sped up

Warmup plugin node_module caching issue

This is observed on branches where INFRASTRUCTURE_TYPE=production is set (currently only master).

Serverless: WarmUp: app-api-master-delete
Serverless: Bundling with Webpack...
⚠ 「copy-webpack-plugin」: unable to locate '_warmup' at '/home/runner/work/macpro-quickstart-serverless/macpro-quickstart-serverless/services/app-api/node_modules/serverless-bundle/src/_warmup'
(the warning above repeats five times)

The warmup plugin is only used on branches where INFRASTRUCTURE_TYPE is set to production; this is thought to explain why this issue is only seen on master.

This is seen as the main delta between master and dev branches, and could explain the 20-30 second speed difference.

AC:

  • The _warmup plugin issue pasted above is no longer observed on INFRASTRUCTURE_TYPE=production branches (master).

Add CTKey to support CloudTamer service users

CloudTamer service users must use CTKey to exchange their CloudTamer credentials for temporary AWS CLI access keys. This should be implemented to stay aligned with the original consumers of this repo, unfortunately, but hopefully it can be made optional.
Maybe if CT info is not detected, the CTKey exchange is skipped.

AC:

  • Users with CloudTamer info only can still deploy this project from CircleCI using the CTKey tool.
  • Users without the CloudTamer restriction are not impacted and can still deploy this project from CircleCI.

Refactor Cloudfront cache invalidation logic

Currently, on deployment, the entire ui cloudfront distribution is invalidated. So we upload our static archive to s3, then invalidate the whole cache.

This should be refactored; the suggested way is:

Create a lambda to do individual object invalidations. Set that lambda to trigger on each new or updated file in the s3 bucket via an s3 event. That lambda invalidates that object. So only the objects that need to be invalidated are. And, the pipeline can quit concerning itself with invalidations.
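
The wiring for that suggestion, sketched in serverless.yml; the handler and bucket names are hypothetical, and "existing: true" attaches the event to a bucket defined elsewhere:

functions:
  invalidateObject:
    handler: handlers/invalidate.handler   # hypothetical: calls CloudFront CreateInvalidation for the changed key
    events:
      - s3:
          bucket: ${self:custom.uiBucketName}   # hypothetical: the bucket the static archive is uploaded to
          event: s3:ObjectCreated:*
          existing: true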

AC:

  • The cache invalidation is handled by S3 events + lambda, so that only the objects that need invalidation on each deploy are invalidated
  • The current invalidation logic run by the CI system is removed

Bug - First run issue - Scan def updater isn't run

The scan definition updater is run on a schedule.
On deployments of new environments, there is a time where the environment is up and usable, but the scan definition uploader hasn't run yet.
This breaks the app; all scans fail saying 'key doesn't exist'... that's the definitions key.

Although it adds a second or two, simply invoking the definition uploader lambda on deploy should fix this. The pattern that's been used is a serverless-plugin-scripts hook calling serverless invoke -f at deploy:finalize. Although this specific code was removed later on, you can see the pattern for this here: https://github.com/CMSgov/macpro-quickstart-serverless/pull/36/files
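
Following that pattern, the hook would look something like this; the function name is hypothetical:

custom:
  scripts:
    hooks:
      # invoke the definition updater once, as the final step of every deploy
      'deploy:finalize': serverless invoke --stage ${self:custom.stage} --function avDownloadDefinitions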

AC:

  • The definition uploader function is guaranteed to have run before new environments are up and expected to be healthy.

Swap TestCafe for Nightwatch.js

This is entirely subjective.

We intend to use Nightwatch for browser UI testing.

Consumers of this repo can surely use any testing framework they'd like, but for our purposes, Nightwatch will be the standard unless there's a need to diverge.

TestCafe and Nightwatch are both simple npm packages, so this should be relatively straightforward to replace.

AC:

  • The existing TestCafe framework is replaced with Nightwatch in the repo.
  • All existing test functionality is preserved (GitHub Actions runs the Nightwatch tests and stores results, just as it does with TestCafe)

A failed nightwatch test will not upload its results to Github Actions

In the deploy GitHub Action, there are two tasks related to running the Nightwatch test: first the test itself, and then an action to upload the results of that test to a store for that given run. By default, failed actions halt execution of future tasks, so in the case of a failure nothing will be uploaded.

I believe this means that results like screenshots of a test failure will not be recoverable from a failed CI run. We should sort out how to perform that upload as part of the same action, or, if there is some way to force the task to run regardless, do that.
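
GitHub Actions does have a way to force the task to run regardless: an if: always() condition on the upload step. A sketch, with the artifact name and results path as placeholders:

  - name: Run nightwatch tests
    run: npm test
  - name: Upload test results
    if: always()   # runs even when the test step failed
    uses: actions/upload-artifact@v2
    with:
      name: nightwatch-results
      path: tests_output/   # placeholder path for screenshots/reports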

STEPS TO REPRODUCE

  • change the text that the current Nightwatch test looks for on the home page so that it doesn't match the expected value
  • push a branch to GH

EXPECTED RESULTS

  • a package of the data related to a failed test should be uploaded to the GitHub action for inspection

ACTUAL RESULTS

  • the upload task is never run, no results are uploaded to the Action Run

Add capability to deploy certain branches to specific/different AWS accounts

Right now, all branches/stages get deployed to the same amazon account.
We want the ability to deploy certain branches (stages) to specific amazon accounts.

AC:

  • A user has a way to specify which aws account to deploy to on a per-branch basis; so a user can deploy all dev environments to account XXX, but specify that the prod branch should deploy to account YYY.

Add an end to end test framework

A browser smoke test would be a good addition. The application endpoint should be fetched and a test suite run.

AC:

  • A smoke test is added to the project that tests the availability of the application endpoint after each deploy.

Add build locking

The CCI builds deploy target environments and will soon run tests.
We want to block concurrent builds.
It seems it's not super straightforward, but there are helper scripts publicly available on github.

AC:

  • Concurrent deploys of a single environment (single branch) are blocked

Add staging/prod support

There should be a suggested pattern for promoting code.
Merges from master -> qa/staging -> prod should be simple and effective for now.

What should probably be done is to build EVERY branch. Simple as that. And perhaps add a "don't build branches that start with nocci-" rule so we have a way to get README updates in without building.

AC:

  • There's a pattern for deploying higher environments out of the box.

Simplify SES_SOURCE_EMAIL_ARN and IAM_PERMISSIONS_BOUNDARY_ARN

SES_SOURCE_EMAIL_ARN and IAM_PERMISSIONS_BOUNDARY_ARN are optional environment variables during deployment.

Each is of a predictable format given the email address / boundary policy and the account id.

Instead of requiring the user to set an account specific ARN, they should only be required to supply an account agnostic name (like an email address), and the code should generate the ARN based off the known format.
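
For example, the SES ARN could be assembled from the supplied email address wherever it's consumed in a CloudFormation-rendered section of serverless.yml; the env variable name follows the existing SES_SOURCE_EMAIL_ADDRESS convention:

SourceArn:
  Fn::Join:
    - ""
    - - "arn:aws:ses:"
      - Ref: AWS::Region
      - ":"
      - Ref: AWS::AccountId
      - ":identity/"
      - ${env:SES_SOURCE_EMAIL_ADDRESS}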

AC:

  • SES_SOURCE_EMAIL_ARN and IAM_PERMISSIONS_BOUNDARY_ARN are replaced with variables that are not necessarily account specific, without impacting existing functionality

Convert the config.yml to work with v2.0

CMS' CircleCI does not support v2.1 out of the box.
The workflow to enable it is unclear; we think it's probably a Jira ticket to somewhere.

For now, the config.yml should be converted to work with 2.0 to keep this targeted to CMS use cases.

AC:

  • The config.yml works on CircleCI systems that support 2.0 but not 2.1.

Remove CircleCI support

Our CircleCI use case is no more, and I'd prefer we not try to maintain support for it.

AC:

  • CircleCI support is removed

Automate stage deletion

Stage deletion should be automated as much as possible.
Since all branches (excluding those that begin with skipci-) are deployed automatically, deletion of those environments should occur when the corresponding branch is deleted.

Manually deployed stages, as in stages deployed outside of the CI system (not common right now), will require a person to run the destroy script... this issue is not for automating the cleanup of those environments, just the envs built by the CI system.
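
If GitHub Actions is the CI system, this can be driven directly by a workflow triggered on branch deletion; a sketch, with the destroy script as a placeholder:

name: Destroy
on: delete

jobs:
  destroy:
    if: github.event.ref_type == 'branch'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Destroy the stage built from the deleted branch
        run: ./destroy.sh ${{ github.event.ref }}   # hypothetical destroy script taking a stage name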

AC:

  • stages (environments) built by the CI system from a branch should be destroyed on branch deletion

Move non-sensitive env vars to SSM

Background:
We have a list of deploy time variables to inject. Things like SES_SOURCE_EMAIL_ADDRESS, ROUTE_53_DOMAIN_NAME,
INFRASTRUCTURE_TYPE, etc. These are currently injected as GitHub Secrets.

On projects where the core of this quickstart is being used, that list is growing rapidly. In particular, projects that make use of VPC based resources suddenly have a lot of info to inject: vpc id, private subnet ids, public subnet ids. The current setup dictates all of this information go into GitHub Secrets.

Now take that list and multiply it, since VPC info will vary for some branches, particularly val/prod.

The problem:

  • We have a long and growing list of variables to put into GitHub secrets.
  • Those secrets are not easy to modify, as you cannot see the current value.
  • There's no way to audit who changed the value of a secret (best we can tell).
  • The way in which environment or account specific information is injected to the build is custom and difficult to explain (branchname_VAR_NAME)

Proposed solution:
All non sensitive deploy configuration should be migrated to AWS SSM Parameter Store (this is often referred to as just 'SSM' in conversation and writing, including this issue).

GitHub Actions will need AWS access keys to authenticate to amazon, but once authenticated, it may fetch deploy information from SSM.

Take the example of VPC information... That's account specific information, and it is much more appropriate to keep that information inside the AWS account (SSM) than to give it to github secrets to inject. The build may authenticate to the correct Amazon account, and then pull that VPC info from a standard place in SSM.

Or look at the example of an SES source email address... That's a value that is not bound to an account, but can vary environment to environment, branch to branch. In this case, an SSM naming convention can be used. A default value may be published at /configuration/default/sesSourceEmailAddress but it can be overridden with a branch specific variable at /configuration/mybranchname/sesSourceEmailAddress. The Serverless code may then simply fetch that value as ${ssm:/configuration/${self:custom.stage}/sesSourceEmailAddress, ssm:/configuration/default/sesSourceEmailAddress}. This has the effect of fetching the branch specific var if it exists, and falling back to the default if it doesn't.
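
Concretely, the consuming side in serverless.yml is just that fallback expression, and the same shape works for account-bound values like VPC info:

custom:
  sesSourceEmailAddress: ${ssm:/configuration/${self:custom.stage}/sesSourceEmailAddress, ssm:/configuration/default/sesSourceEmailAddress}
  vpcId: ${ssm:/configuration/${self:custom.stage}/vpcId, ssm:/configuration/default/vpcId}   # same pattern for account-specific info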

In a nutshell, aside from the AWS keys, all of this info can be much better organized, maintained, and fetched using SSM.

  • The list of GitHub secrets is limited to just the AWS access key id and secret access key
  • The values of deploy variables are easy to view and modify in SSM from the AWS Console
  • Modifications to the SSM variables are logged automatically by CloudTrail, and are easy to audit.
  • The way in which we are overriding variables for a specific branch/env is now in the serverless code, and is not reliant on a custom env variable scheme. The serverless code that fetches ssm self documents the override pattern.

Note: This is predicated on authenticating to Amazon, so the AWS access key id and secret access key will need to stay in GitHub Secrets. That's been said a few times. What hasn't been addressed is 'what about other sensitive information?'. Currently, we don't have sensitive info to supply outside of the AWS keys. But if there were some, could we store that in SSM too? Answer: yes, but the masking provided by injecting true secrets as GitHub Secrets prevents people from printing sensitive information. That masking is not provided out of the box by serverless, as it doesn't know it's a secret. So, this is the reasoning behind billing this issue as 'move all non-sensitive info to SSM'... it is our current understanding that we are better off keeping any future sensitive information as true GitHub Secrets. There may be a way to denote a serverless variable as secret; we haven't looked into it. And finally, we might be better off just not injecting secret strings into the build and avoiding the issue altogether.

AC:

  • All non-sensitive deploy time information is stored and fetched from AWS SSM Parameter Store, instead of GitHub Secrets.

Add more robust branch specific environment variable injector

Right now, some variables are set per branch...
For instance, you can set <branch_name>_CLOUDFRONT_DOMAIN_NAME to set CLOUDFRONT_DOMAIN_NAME for a specific branch and only that branch.
For example, master_CLOUDFRONT_DOMAIN_NAME is set in CircleCI project settings to be aps.cl-demo.com.
That works well enough, but there are more variables that developers will want and need to set for specific branches.

A more fleshed out way/pattern to do this is what we want.
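
For reference, the current scheme boils down to bash indirect expansion in a CI step, roughly like this; the result is persisted via $BASH_ENV so later steps see it, and branch names containing characters that are invalid in variable names would need sanitizing first:

  - run:
      name: Resolve branch-specific variables
      command: |
        candidate="${CIRCLE_BRANCH}_CLOUDFRONT_DOMAIN_NAME"
        # use the branch-specific value when set, otherwise keep the unprefixed default
        echo "export CLOUDFRONT_DOMAIN_NAME=${!candidate:-$CLOUDFRONT_DOMAIN_NAME}" >> $BASH_ENV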

AC:

  • There is an established pattern for injecting variables for specific branches, and not others.
  • The pattern is more broadly applicable and easier to modify than what exists now.

Improve stage destroy functionality

We want a better way to destroy stages (branches) built from our project.

The serverless framework uses CloudFormation under the hood to build stuff in Amazon. So all state required to destroy is in Amazon itself, in a CloudFormation stack. For instance, if we have 10 services in our services folder, we will have 10 CloudFormation stacks for any given stage/branch.

It's possible to search for CloudFormation stacks matching a pattern and delete accordingly. This is what I suggest we do. I've written a script that does this, but it needs some improvement and code changes to make it ready to be a standard workflow.

There are a few gotchas to know about:

  • CloudFormation will fail to delete any S3 bucket that is not empty. This is immediately relevant when deleting a stack, because every serverless service has a serverless deployment bucket in S3, and some services (like ui) have more than one bucket. This is easy enough to handle; the gotcha is knowing you need to do it. In my script I simply found all the S3 buckets I needed to empty and ran rm --recursive on each, removing everything.
  • Identifying the CloudFormation stacks you want to destroy... My script takes a single stage name and matches it against CloudFormation stack names. It works, but it's not bomb proof. What would be much better is to make code changes to the quickstart and implement a stack tagging convention, e.g. a tag with Name: macpro-quickstart-serverless-service and Value: <branch/stage name> on the stack itself (see the sketch below). CloudFormation would propagate that tag to all resources it creates for all AWS services that support tagging, but we don't care about that. We do care that this gives the destroy script an easy way to identify all CloudFormation stacks for a stage/branch.
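
One possible rendering of that tagging convention in each service's serverless.yml, using the built-in stackTags setting (tag keys/values here are illustrative):

provider:
  stackTags:
    PROJECT: macpro-quickstart-serverless   # or a per-service value, per the convention above
    STAGE: ${self:custom.stage}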

For those reading that have dealt with terraform destroys in the past.... this is so much cleaner.

  • Our state is sitting as an object in an amazon service, rather than in an s3 bucket whose configuration is passed in at runtime. So I don't care about what backend was used in building.
  • I don't care what version of the code was deployed to the stacks (so long as the tagging described above is in place). In fact, I don't need any source code to effect the delete, unlike terraform. Since all of that state is in CloudFormation, CloudFormation knows how to delete what it created.
  • Half built or failed-to-build environments are destroyed just the same. With terraform, a half built env often fails to deploy, and human intervention is needed. Not here.

This can all be run in lambda, but I'd consider that a future enhancement. I think step 1 is to get the tagging in place and the destroy script in source. Where and when to run destroy is something to think about, and maybe something best left to the teams.

AC:

  • The cloudformation stacks for a given stage are programmatically and exclusively identifiable (see CF level tagging above)
  • A destroy script is added to the project that can accept a stage name and destroy all associated stacks.
