
PaaS Bootstrap

We use the code in this repository to bootstrap our AWS PaaS environment. The normal flow is:

  1. Create the VPC under which the PaaS systems will live
  2. Create a Concourse using bosh create-env
  3. Deploy the pipelines that will spin up BOSH and CloudFoundry
  4. Sit back and wait

Pre-requisites

  • AWS CLI
  • Terraform CLI
  • Fly CLI
  • yq (e.g. brew install yq)
  • jq (e.g. brew install jq)
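
On macOS with Homebrew, for example, most of these can be installed in one go (a rough sketch; formula names may differ on your platform, and fly is usually downloaded from your Concourse web UI or the Concourse releases page rather than installed via Homebrew):

brew install awscli terraform jq yq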

Creating a new environment

You'll need to create a <env>_vpc.tfvars file with az1, az2, region and parent_dns_zone:

Note: Multiple AZs are required in order to deploy an AWS ALB.

Set ingress_whitelist to the CIDRs that may access Concourse.

{
  "az1": "eu-west-1a",
  "az2": "eu-west-1b",
  "region": "eu-west-1",
  "parent_dns_zone": "<domain>",
  "ingress_whitelist": ["0.0.0.0/0"],
  "slack_webhook_uri": "https://hooks.slack.com/services/<generated uri>"
}

Example command:

git submodule update --init
ENVIRONMENT=<choose_a_name> AWS_ACCESS_KEY_ID=<your_key_id> AWS_SECRET_ACCESS_KEY=<your_secret_key> \
  make concourse

Where:

  • ENVIRONMENT - a name for your environment
  • AWS_ACCESS_KEY_ID - your AWS access key ID
  • AWS_SECRET_ACCESS_KEY - your AWS secret access key

You can specify AWS_PROFILE, rather than the two AWS secrets, for every step except make concourse; the bosh create-env command currently does not handle AWS_PROFILE correctly.
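
For example, to run a later step with a named profile instead of the key pair (the profile name here is just a placeholder for whatever you have configured in your AWS config):

ENVIRONMENT=<env> AWS_PROFILE=<your_profile> make test_pipeline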

Connecting to Concourse

The DNS name of Concourse can be found with:

terraform output -state=<env>_concourse.tfstate.json concourse_fqdn

Go to https://<concourse_fqdn> to log in.

The username is admin and you can get the password through:

bin/concourse_password.sh -e <env>

or using

make concourse_password ENVIRONMENT=<env>
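
The fly commands used in the sections below need a fly target for the environment. A minimal login sketch, assuming the concourse_fqdn from above and that bin/concourse_password.sh prints the password to stdout:

fly -t <env> login -c https://<concourse_fqdn> -u admin -p "$(bin/concourse_password.sh -e <env>)"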

Testing that Concourse works

ENVIRONMENT=<env> AWS_ACCESS_KEY_ID=<your_key_id> AWS_SECRET_ACCESS_KEY=<your_secret_key> make test_pipeline
fly -t <env> trigger-job -j test/pipeline-test -w

Installing the deployment pipeline

The deploy_pipeline pipeline will spin up the jump box and BOSH director.

ENVIRONMENT=<env> AWS_ACCESS_KEY_ID=<your_key_id> AWS_SECRET_ACCESS_KEY=<your_secret_key> make deploy_pipeline
fly -t <env> trigger-job -j deploy_pipeline/terraform-jumpbox -w

If you are deploying from a branch, you should also specify it with the BRANCH environment variable, so that the pipeline will trigger correctly.

BRANCH=<your git branch> ... make deploy_pipeline

Logging in to BOSH

Once the deployment pipeline has run to completion, you can set up your connection to BOSH easily using:

  bin/bosh_credentials.sh -e <env> -f
  # spins up a subshell with a Socks5 proxy connection via jump box to BOSH

or

  source bin/bosh_credentials.sh -e <env>
  # sets up the Socks5 proxy connection as above, but it's now your job to kill it
  # it also sets BOSH_CLIENT, BOSH_CLIENT_SECRET environment variables
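
With the credentials and proxy in place, a quick way to check the connection is a standard BOSH CLI call (assuming the director address and CA certificate are also available to the CLI, e.g. via environment variables exported by the script; this README does not spell those out):

  bosh env           # show the director name, version and the logged-in user
  bosh deployments   # list deployments managed by this director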

LICENCE

Copyright (c) 2018 Crown Copyright (Office for National Statistics)

Released under MIT license, see LICENSE for details.


paas-bootstrap's Issues

Issue creating S3 buckets on fresh build

When running a fresh pipeline, the terraform-cf job initially fails, complaining: "Error putting S3 versioning: AccessDenied: Access Denied"

This appears to be a general Terraform timing issue between the bucket being created and encryption being applied.

Workaround

Re-running the job succeeds (the buckets now exist, so they merely need to be amended).
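
For example, assuming the job sits in one of the pipelines installed above (the issue does not name the pipeline), re-triggering it looks like:

fly -t <env> trigger-job -j <pipeline>/terraform-cf -w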

Route53 record "already exists" error when fresh bootstrapping

When running make concourse on an environment that previously existed but has been torn down, the following error occurs:

Error: Error refreshing state: 1 error(s) occurred:

* data.aws_route53_zone.child_zone: 1 error(s) occurred:

* data.aws_route53_zone.child_zone: data.aws_route53_zone.child_zone: multiple Route53Zone found please use vpc_id option to filter

This is possibly due to Route53 records not being fully torn down during destruction.

BOSH isn't using IAM profile for blobstore access

BOSH uses AWS credentials, rather than an IAM instance profile, to access the S3 blobstore.
These credentials get stored on every VM managed by BOSH, which is very undesirable.
This branch MUST NOT be used until this gets fixed.
