To solve this, users of this library will always have the origin set to SSH, unless the environment variable `BB_USE_HTTP_ORIGIN` is set. In that case the HTTP origin is used, which offers improved authentication functionality but does not work for repositories with branch protection.
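For example, to opt in to the HTTP origin for a build, set the variable before the library is used (a minimal sketch; it is assumed that any non-empty value enables it):

```bash
# Assumption: the library only checks whether BB_USE_HTTP_ORIGIN is set,
# not its specific value.
export BB_USE_HTTP_ORIGIN=1
```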
- Build a Java project, build a Docker container and push the container to an AWS ECR repo
- Build an NPM project, create a tarball and push it to an S3 bucket
- Package a Lambda function, ZIP it and push it to an S3 bucket
The environment can be defined in 2 places:
- The BB pipeline settings (this is the preferred way for secrets)
- With an `export` statement in the `bitbucket-pipelines.yml` file. This should not be used for secrets.
- `AWS_ACCESS_KEY_ID_ECR_SOURCE`
- `AWS_SECRET_ACCESS_KEY_ECR_SOURCE`
- `AWS_REGION_SOURCE`: Optional, default is `eu-central-1`
- `AWS_ACCESS_KEY_ID_ECR_TARGET`
- `AWS_SECRET_ACCESS_KEY_ECR_TARGET`: Must be secret
- `AWS_REGION_TARGET`: Optional, default is `eu-central-1`
- `DOCKER_IMAGE`
The `deploy` step triggers the pipeline of a repository that contains the configuration for that specific environment. This trigger is done using the BB pipeline REST API (a sketch of such a call follows below the variable list).

IMPORTANT: The script `sync_trigger_bb_build.bash` requires `jq`.

- `BB_USER`
- `BB_APP_PASSWORD`: See here to create a BB application password
- `REMOTE_REPO_OWNER`
- `REMOTE_REPO_SLUG`
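As a rough sketch of what such a trigger looks like (this is not the actual content of `sync_trigger_bb_build.bash`; the target ref and payload shown here are assumptions), the Bitbucket Cloud REST API can be called with `curl` and the response parsed with `jq`:

```bash
# Trigger a pipeline on the remote repository and read the build number
# from the JSON response. The branch "master" is only an example.
response=$(curl -s -u "${BB_USER}:${BB_APP_PASSWORD}" \
  -H "Content-Type: application/json" \
  -X POST "https://api.bitbucket.org/2.0/repositories/${REMOTE_REPO_OWNER}/${REMOTE_REPO_SLUG}/pipelines/" \
  -d '{"target": {"type": "pipeline_ref_target", "ref_type": "branch", "ref_name": "master"}}')
echo "Triggered remote build #$(echo "${response}" | jq -r '.build_number')"
```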
Use this function to build a Docker artifact image from a source code repository. The script looks for the file `Dockerfile` in these locations:
/${BITBUCKET_CLONE_DIR}/Dockerfile
/${BITBUCKET_CLONE_DIR}/docker/Dockerfile
The complete list of environment variables:
- `AWS_ACCOUNT_ID_TARGET`: Also tries `AWS_ECR_ACCOUNTID` if `AWS_ACCESS_KEY_ID_S3_TARGET` is not set.
- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
- `DOCKER_IMAGE`
The image will be available as:
${AWS_ACCOUNTID_TARGET}.dkr.ecr.${AWS_REGION_TARGET:-eu-central-1}.amazonaws.com/${DOCKER_IMAGE}:latest
${AWS_ACCOUNTID_TARGET}.dkr.ecr.${AWS_REGION_TARGET:-eu-central-1}.amazonaws.com/${DOCKER_IMAGE}:${BITBUCKET_COMMIT}
${AWS_ACCOUNTID_TARGET}.dkr.ecr.${AWS_REGION_TARGET:-eu-central-1}.amazonaws.com/${DOCKER_IMAGE}:${BITBUCKET_TAG}
if it is a tag-triggered build, `${RC_PREFIX}` is defined and `[[ ${BITBUCKET_TAG} = ${RC_PREFIX}* ]]` (see the sketch below)
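As an illustration of the naming above (a sketch, not the library's actual code), the pushed tags correspond to roughly the following `docker tag` calls:

```bash
REGISTRY="${AWS_ACCOUNTID_TARGET}.dkr.ecr.${AWS_REGION_TARGET:-eu-central-1}.amazonaws.com"
docker tag "${DOCKER_IMAGE}" "${REGISTRY}/${DOCKER_IMAGE}:latest"
docker tag "${DOCKER_IMAGE}" "${REGISTRY}/${DOCKER_IMAGE}:${BITBUCKET_COMMIT}"
# Only for tag-triggered builds where the tag matches the RC prefix
if [[ -n "${BITBUCKET_TAG:-}" && -n "${RC_PREFIX:-}" && ${BITBUCKET_TAG} = ${RC_PREFIX}* ]]; then
  docker tag "${DOCKER_IMAGE}" "${REGISTRY}/${DOCKER_IMAGE}:${BITBUCKET_TAG}"
fi
```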
The function `s3_artifact` runs an optional command `BUILD_COMMAND` on the repository, creates a tarball containing the results and copies the tarball to an S3 bucket, using the AWS credentials defined by `AWS_ACCESS_KEY_ID_S3_TARGET` and `AWS_SECRET_ACCESS_KEY_S3_TARGET`. An example pipeline step is shown further below.
The complete list of environment variables:
- `AWS_ACCESS_KEY_ID_S3_TARGET`
- `AWS_SECRET_ACCESS_KEY_S3_TARGET`
- `BUILD_COMMAND`: Optional command to build the artifact
- `ARTIFACT_NAME`
- `PAYLOAD_LOCATION`
- `S3_ARTIFACT_BUCKET`
The artifact will be available as:
s3://${S3_ARTIFACT_BUCKET}/${ARTIFACT_NAME}-last.tgz
s3://${S3_ARTIFACT_BUCKET}/${ARTIFACT_NAME}-${BITBUCKET_COMMIT}.tgz
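A hedged example of a pipeline step that uses `s3_artifact`; how the library functions are sourced (`lib.bash`) is an assumption based on the Lambda example further down, and the bucket, artifact name and build command are hypothetical:

```bash
git clone https://github.com/rik2803/bb-aws-utils.git
source bb-aws-utils/lib.bash
# AWS_ACCESS_KEY_ID_S3_TARGET and AWS_SECRET_ACCESS_KEY_S3_TARGET are set as
# secured variables in the BB pipeline settings.
export S3_ARTIFACT_BUCKET=my-artifact-bucket
export ARTIFACT_NAME=my-frontend
export PAYLOAD_LOCATION=dist
export BUILD_COMMAND="npm ci && npm run build"
s3_artifact
```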
Artifacts that were created using s3_artifact
can be deployed using s3_deploy
.
The function:

- Downloads an artifact tar file from `S3_ARTIFACT_BUCKET`
- Unpacks the tar file in a `workdir` directory
- (optional) Replaces `__VARNAME__` placeholders with the value of `CFG_${VARNAME}` in all files in `workdir`
- Recursively copies the content of `workdir` to `S3_DEST_BUCKET`
- (optional) Invalidates the CloudFront distribution if `CLOUDFRONT_DISTRIBUTION_ID` is defined
An overview of all allowed environment variables:

- `AWS_ACCESS_KEY_ID_S3_SOURCE`: AWS read-only credentials for the bucket that contains the artifact file
- `AWS_SECRET_ACCESS_KEY_S3_SOURCE`: AWS read-only credentials for the bucket that contains the artifact file
- `S3_ARTIFACT_BUCKET`: Name of the bucket that contains the artifact file
- `ARTIFACT_NAME`: Basename of the artifact. Will get the `-last.tgz` suffix (default) or the `-${TAG}.tgz` suffix (if a file `TAG` exists, its content is used to set the value of `${TAG}`)
- `AWS_ACCESS_KEY_ID_S3_TARGET`: AWS read-write credentials for the bucket that will receive the files
- `AWS_SECRET_ACCESS_KEY_S3_TARGET`: AWS read-write credentials for the bucket that will receive the files
- `S3_DEST_BUCKET`: Name of the destination bucket
- `S3_PREFIX` (optional): Prefix to be used for the copy, the default is no prefix (empty string)
- `AWS_ACCESS_CONTROL` (optional): ACL permissions to set on the destination files. Default is `private`, allowed values can be consulted here.
- `CFG_*` (optional): All variables starting with `CFG_` can be used to configure the files in the build artifact. How this works (see the sketch after this list):
  - Imagine the variable `CFG_BACKEND_URL` with value `https://mybackend.acme.com`
  - All files under `workdir` (where the tarball is unpacked) will be scanned for the string `__BACKEND_URL__`
  - Every occurrence of `__BACKEND_URL__` will be replaced with the string `https://mybackend.acme.com`
- `CLOUDFRONT_DISTRIBUTION_ID` (optional): When this variable is defined, the CloudFront distribution with that name will be invalidated.
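Conceptually, the `CFG_*` substitution amounts to something like the following sketch (not the library's actual implementation):

```bash
# For every variable CFG_VARNAME, replace __VARNAME__ in all files under workdir
for var in $(compgen -v CFG_); do
  placeholder="__${var#CFG_}__"
  grep -rl "${placeholder}" workdir | xargs -r sed -i "s|${placeholder}|${!var}|g"
done
```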
Service Repository
- Environment variable `AWS_CDK_PROJECT` must be set
- Environment variable `SERVICE_NAME` must be set; it has to be like `myNewProjectService`
- The Jira issue must be in your branch name, like `feature/PROJ-1234-my-new-project`
AWS CDK Project Repository
- Add the following to your `tsconfig.json` file:
{
"compilerOptions": {
...
"resolveJsonModule": true
},
...
}
- Import the `versions.json` file in your `config/default.ts` file:
import {serviceVersions} from "./versions.json"
- Add the `serviceVersions` to the `imageTag` of your service (a hypothetical `versions.json` example follows below):
imageTag: deferConfig(function() {return serviceVersions.serviceName}),
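For reference, a hypothetical `config/versions.json` that satisfies the import above could look like this (the key and version number are just examples, the real keys depend on your services):

```bash
# Hypothetical content of config/versions.json
cat > config/versions.json <<'EOF'
{
  "serviceVersions": {
    "serviceName": "1.2.3"
  }
}
EOF
```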
- Creates the variable `jira_issue` with the value `PROJ-1234`; this is based on the branch name
- Gets the new version of the service using `maven` and saves it to `project_version` (see the sketch after this list)
- Clones the `AWS_CDK_PROJECT` repository from BitBucket
- Checks if the same branch exists in the `AWS_CDK_PROJECT` repository; if not, it will create it and check out the branch
- Changes the version of the service in `config/versions.json` to the new version
- Commits and pushes the changes to the `AWS_CDK_PROJECT` repository
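A sketch of how the first two steps could be implemented (not the library's actual code; the maven invocation is an assumption):

```bash
# Derive the Jira issue from the branch name, e.g. feature/PROJ-1234-my-new-project -> PROJ-1234
jira_issue=$(echo "${BITBUCKET_BRANCH}" | grep -oE '[A-Z]+-[0-9]+' | head -1)
# Determine the new service version with maven
project_version=$(mvn -q -DforceStdout help:evaluate -Dexpression=project.version)
```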
Service Repository
- Environment variable `AWS_CDK_PROJECT` must be set
- The Jira issue must be in your branch name, like `feature/PROJ-1234-my-new-project`
AWS CDK Project Repository
- Add the following to your `tsconfig.json` file:
{
"compilerOptions": {
...
"resolveJsonModule": true
},
...
}
- Import the `versions.json` file in your `config/default.ts` file:
import {configLabel} from "./versions.json"
- Add the `configLabel` to the `springCloudConfigLabel` of your service:
springCloudConfigLabel: deferConfig(function() {return configLabel}),
- Creates the variable `jira_issue` with the value `PROJ-1234`; this is based on the branch name
- Gets the tag linked to the current commit using `git tag --points-at HEAD` and saves it to `config_label` (see the sketch after this list)
- Clones the `AWS_CDK_PROJECT` repository from BitBucket
- Checks if the same branch exists in the `AWS_CDK_PROJECT` repository; if not, it will create it and check out the branch
- Changes the config label in `config/versions.json` to the new config label
- Commits and pushes the changes to the `AWS_CDK_PROJECT` repository
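A minimal sketch of how the config label could be derived (not the library's actual code):

```bash
# Use the tag that points at the current commit as the config label
config_label=$(git tag --points-at HEAD | head -1)
```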
The scripts are:
bb-aws-utils/build-and-push-docker-image.bash
bb-aws-utils/deploy-docker-image.bash
What this does:
- Use the credentials `AWS_ACCESS_KEY_ID_ECR_SOURCE` and `AWS_SECRET_ACCESS_KEY_ECR_SOURCE` to log in to the source ECR
- Build a new Docker image `FROM` the image `${AWS_ACCOUNTID_SRC}.dkr.ecr.${AWS_REGION_SOURCE:-eu-central-1}.amazonaws.com/${DOCKER_IMAGE}:${TAG:-latest}`, where `${TAG}` is the content of the file `TAG`
- Use the credentials `AWS_ACCESS_KEY_ID_ECR_TARGET` and `AWS_SECRET_ACCESS_KEY_ECR_TARGET` to log in to the target ECR
- Tag the new image and push it to `${AWS_ACCOUNTID_TARGET}.dkr.ecr.${AWS_REGION_TARGET:-eu-central-1}.amazonaws.com/${DOCKER_IMAGE}-${ENVIRONMENT:-dev}`
- Disable the alarms that contain the string in the variable `${CW_ALARM_SUBSTR}` (skip this step if the variable is not set)
- Run the following command to forcibly update the service (this will pull the latest image from the task's definition): `aws ecs update-service --cluster ${ECS_CLUSTER} --force-new-deployment --service ${ECS_SERVICE} --region ${AWS_REGION:-eu-central-1}`
- Finally, wait 120 seconds for the update to finish and re-enable the CloudWatch alarms (skip this step if the variable `CW_ALARM_SUBSTR` is not set)
image: python:3.6
pipelines:
custom:
build_and_deploy:
- step:
name: Build and push Docker deploy image and start deploy
script:
- git clone https://github.com/rik2803/bb-aws-utils.git
- export AWS_REGION=eu-central-1
- export AWS_ACCOUNTID_SRC=123456789012
- export AWS_ACCOUNTID_TARGET=210987654321
- export ECS_CLUSTER=my-ecs-cluster
- export ECS_SERVICE=my-service
- export ENVIRONMENT=dev
- export DOCKER_IMAGE=my/service-image
- bb-aws-utils/build-and-push-docker-image.bash
- export CW_ALARM_SUBSTR=MyServiceAlarm
- bb-aws-utils/deploy-docker-image.bash
options:
docker: true
- `AWS_ACCESS_KEY_ID_ECR_SOURCE` and `AWS_SECRET_ACCESS_KEY_ECR_SOURCE`: Credentials for the source ECR that the Docker image is based upon
- `AWS_ACCOUNTID_SRC`: Account ID where the source ECR is hosted
- `AWS_ACCESS_KEY_ID_ECR_TARGET` and `AWS_SECRET_ACCESS_KEY_ECR_TARGET`: Credentials for the destination ECR in the account of the environment where the service is running
- `AWS_ACCOUNTID_TARGET`: Account ID where the destination ECR is hosted
- `AWS_REGION`: The region (optional, default is `eu-central-1`)
- `ECS_CLUSTER`: The name of the cluster where the service to update is running
- `ECS_SERVICE`: The name of the service to update
- `ENVIRONMENT`: The environment (`dev`, `prd`, ...)
- `DOCKER_IMAGE`: The name of the Docker image, without the tag
- `CW_ALARM_SUBSTR`: Determines the CloudWatch alarms that will be paused during deployment of the service
This explains how to build the Lambda function package and publish it to an S3 bucket. Whenever the function is required in an AWS account, it can be downloaded from that bucket.
It's important that the accounts that need to use the Lambda function have read access to the S3 bucket, to be able to install the Lambda function.
The BB pipeline build requires these pipeline environment variables:
- `LAMBDA_RUNTIME`: One of `python2.7`, `python3.6`, `nodejs8.10`
- `LAMBDA_FUNCTION_NAME`: The name to use to store the function in the S3 bucket
- `LAMBDA_PUBLIC`: Also copy the file to `${S3_DEST_BUCKET}-public` with `public-read` permissions
- `S3_DEST_BUCKET`: The name of the bucket the function should be deployed to
- `AWS_ACCESS_KEY_ID`: Credentials with write access to `S3_DEST_BUCKET`
- `AWS_SECRET_ACCESS_KEY`: Credentials with write access to `S3_DEST_BUCKET`
The result of a successful pipeline run is:

- An S3 object named `${LAMBDA_FUNCTION_NAME}.zip` in the bucket `S3_DEST_BUCKET`
- An S3 object named `${LAMBDA_FUNCTION_NAME}-${BITBUCKET_COMMIT}.zip` in the bucket `S3_DEST_BUCKET`

It is advised to use the S3 object that has the commit string in its name, to have a form of version management (an example is shown below).
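A hedged example of how such a commit-pinned object could then be used in a target account (the function name here is hypothetical; the library itself only publishes the package):

```bash
# Point an existing Lambda function at the commit-specific package in S3
aws lambda update-function-code \
  --function-name my-function \
  --s3-bucket "${S3_DEST_BUCKET}" \
  --s3-key "${LAMBDA_FUNCTION_NAME}-${BITBUCKET_COMMIT}.zip"
```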
- If libraries need to be installed, create a `requirements.txt` file in the root of your project. The dependencies will be installed by the pipeline.
- The Lambda function file should be called `lambda.py` or referenced by the environment variable `LAMBDA_FUNCTION_FILE` (a packaging sketch follows below)
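A hedged sketch of what the packaging conceptually does for a Python function (the library's actual implementation may differ):

```bash
# Install the dependencies next to the function code, zip everything and
# publish the commit-named package to the destination bucket.
pip install -r requirements.txt -t build/
cp lambda.py build/
(cd build && zip -r "../${LAMBDA_FUNCTION_NAME}.zip" .)
aws s3 cp "${LAMBDA_FUNCTION_NAME}.zip" "s3://${S3_DEST_BUCKET}/${LAMBDA_FUNCTION_NAME}-${BITBUCKET_COMMIT}.zip"
```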
- If libraries need to be installed, create a `package.json` file in the root of your project. The dependencies will be installed with `npm i` by the pipeline if this file exists.
- The Lambda function file should be called `index.js` or referenced by the environment variable `LAMBDA_FUNCTION_FILE`
As mentioned earlier, the environment can be set in `bitbucket-pipelines.yml` or in the BB Pipeline settings. To easily configure a repository for BB Pipelines, check out this GitHub repository.
image: node:8
pipelines:
custom:
build_lambda_function_package_and_publish_to_s3:
- step:
name: Build the lambda function package publish to S3
caches:
- node
script:
- git clone https://github.com/rik2803/bb-docker-aws-utils.git
- source bb-docker-aws-utils/lib.bash
- export S3_DEST_BUCKET=ixortooling-prd-s3-lambda-function-store
- export LAMBDA_FUNCTION_NAME=sns-to-google-chat
- s3_lambda_build_and_push
git submodule init
git submodule add https://github.com/sstephenson/bats test/libs/bats
git submodule add https://github.com/ztombol/bats-assert test/libs/bats-assert
git submodule add https://github.com/ztombol/bats-support test/libs/bats-support
git add .
git commit -m 'installed bats'
./test/libs/bats/bin/bats test/*bats
Submodules are not automatically cloned inside a BB pipeline. To clone the submodules, use this command:
git submodule update --init --recursive
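A minimal example of a bats test that uses the submodules added above (a hypothetical test file; adjust the `source` target to whatever script you want to test):

```bash
#!/usr/bin/env bats
# test/example.bats
load 'libs/bats-support/load'
load 'libs/bats-assert/load'

@test "library can be sourced" {
  run bash -c 'source lib.bash && echo ok'
  assert_success
  assert_output "ok"
}
```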