ecs-deploy's Introduction

ECS Deploy

ecs-deploy simplifies deployments on Amazon ECS by providing a convenient CLI for complex, frequently executed actions.

Key Features

  • support for complex task definitions (e.g. multiple containers & task role)
  • easily redeploy the current task definition (including a fresh docker pull of possibly updated images)
  • deploy new versions/tags of all containers or just a single container in your task definition
  • scale up or down by adjusting the desired count of running tasks
  • add or adjust containers' environment variables
  • run one-off tasks from the CLI
  • automatically monitor deployments in New Relic

TL;DR

Deploy a new version of your service:

$ ecs deploy my-cluster my-service --tag 1.2.3

Redeploy the current version of a service:

$ ecs deploy my-cluster my-service

Scale up or down a service:

$ ecs scale my-cluster my-service 4

Update a cron job (scheduled task):

$ ecs cron my-cluster my-task my-rule

Update a task definition (without running or deploying):

$ ecs update my-task

Installation

The project is available on PyPI. Simply run:

$ pip install ecs-deploy

Run via Docker

Instead of installing ecs-deploy locally, which requires a Python environment, you can run ecs-deploy via Docker. All versions starting from 1.7.1 are available on Docker Hub: https://cloud.docker.com/repository/docker/fabfuel/ecs-deploy

Running ecs-deploy via Docker is as easy as:

docker run fabfuel/ecs-deploy:1.10.2

In this example, the stable version 1.10.2 is executed. Alternatively you can use Docker tags master or latest for the latest stable version or Docker tag develop for the newest development version of ecs-deploy.

Please be aware that when running ecs-deploy via Docker, the configuration described below does not apply. You have to provide the credentials and the AWS region as command-line options or environment variables:

docker run fabfuel/ecs-deploy:1.10.2 ecs deploy my-cluster my-service --region eu-central-1 --access-key-id ABC --secret-access-key ABC

Configuration

As ecs-deploy is based on boto3 (the official AWS Python library), there are several ways to configure and store the authentication credentials. Please read the boto3 documentation for more details (http://boto3.readthedocs.org/en/latest/guide/configuration.html#configuration). The simplest way is by running:

$ aws configure

Alternatively you can pass the AWS credentials (via --access-key-id and --secret-access-key) or the AWS configuration profile (via --profile) as options when you run ecs.
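For example, to deploy using a named profile from your AWS configuration (the profile name "production" is purely illustrative):

$ ecs deploy my-cluster my-service --profile production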

AWS IAM

If you are using ecs-deploy with a role or user account that does not have full AWS access, such as in a deploy script, you will need to use or create an IAM policy with the correct set of permissions in order for your deploys to succeed. One option is to use the managed AmazonECS_FullAccess policy (https://docs.aws.amazon.com/AmazonECS/latest/userguide/security-iam-awsmanpol.html#security-iam-awsmanpol-AmazonECS_FullAccess). If you would prefer to create a role with a more minimal set of permissions, the following are required:

  • ecs:ListServices
  • ecs:UpdateService
  • ecs:ListTasks
  • ecs:RegisterTaskDefinition
  • ecs:DescribeServices
  • ecs:DescribeTasks
  • ecs:ListTaskDefinitions
  • ecs:DescribeTaskDefinition
  • ecs:DeregisterTaskDefinition

If using custom IAM permissions, you will also need to grant the iam:PassRole permission for each IAM role referenced by your task definitions. See https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_passrole.html for more information.

Note that not every permission is required for every action you can take in ecs-deploy. You may be able to adjust permissions based on your specific needs.
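For reference, a minimal customer-managed policy covering the permissions listed above could look roughly like the following sketch (illustrative only - tighten the Resource entries to your own task definitions, services and roles; the role ARN shown is the example one used further below):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecs:ListServices",
        "ecs:UpdateService",
        "ecs:ListTasks",
        "ecs:RegisterTaskDefinition",
        "ecs:DescribeServices",
        "ecs:DescribeTasks",
        "ecs:ListTaskDefinitions",
        "ecs:DescribeTaskDefinition",
        "ecs:DeregisterTaskDefinition"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::123456789012:role/MySpecialEcsTaskRole"
    }
  ]
}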

Actions

Currently the following actions are supported:

deploy

Redeploy a service either without any modifications or with a new image, environment variable, docker label, and/or command definition.

scale

Scale a service up or down and change the number of running tasks.

run

Run a one-off task based on an existing task-definition and optionally override command, environment variables and/or docker labels.

update

Update a task definition by creating a new revision to set a new image, environment variable, docker label, and/or command definition, etc.

cron (scheduled task)

Update a task definition and update an events rule (scheduled task) to use the new task definition.

Usage

For detailed information about the available actions, arguments and options, run:

$ ecs deploy --help
$ ecs scale --help
$ ecs run --help

Examples

All examples assume that authentication has already been configured.

Deployment

Simple Redeploy

To redeploy a service without any modifications, but pulling the most recent image versions, run the following command. This will duplicate the current task definition and cause the service to redeploy all running tasks:

$ ecs deploy my-cluster my-service

Deploy a new tag

To change the tag for all images in all containers in the task definition, run the following command:

$ ecs deploy my-cluster my-service -t 1.2.3

Deploy a new image

To change the image of a specific container, run the following command:

$ ecs deploy my-cluster my-service --image webserver nginx:1.11.8

This will modify the webserver container only and change its image to "nginx:1.11.8".

Deploy several new images

The -i or --image option can also be passed several times:

$ ecs deploy my-cluster my-service -i webserver nginx:1.9 -i application my-app:1.2.3

This will change the webserver's container image to "nginx:1.9" and the application's image to "my-app:1.2.3".

Deploy a custom task definition

To deploy any task definition (independent of which is currently used in the service), you can use the --task parameter. The value can be:

A fully qualified task ARN:

$ ecs deploy my-cluster my-service --task arn:aws:ecs:eu-central-1:123456789012:task-definition/my-task:20

A task family name with revision:

$ ecs deploy my-cluster my-service --task my-task:20

Or just a task family name. In this case, the most recent revision is used:

$ ecs deploy my-cluster my-service --task my-task

Important

ecs will still create a new task definition, which is then used in the service. This is done to retain consistent behaviour and to ensure that the ECS agent pulls all images. The newly created task definition will, however, be based on the given task, not on the one currently used by the service.

Set an environment variable

To add a new or adjust an existing environment variable of a specific container, run the following command:

$ ecs deploy my-cluster my-service -e webserver SOME_VARIABLE SOME_VALUE

This will modify the webserver container definition and add or overwrite the environment variable SOME_VARIABLE with the value "SOME_VALUE". This way you can add new or adjust already existing environment variables.

Adjust multiple environment variables

You can add or change multiple environment variables at once by passing the -e (or --env) option several times:

$ ecs deploy my-cluster my-service -e webserver SOME_VARIABLE SOME_VALUE -e webserver OTHER_VARIABLE OTHER_VALUE -e app APP_VARIABLE APP_VALUE

This will modify the definition of two containers. The webserver's environment variable SOME_VARIABLE will be set to "SOME_VALUE" and the variable OTHER_VARIABLE to "OTHER_VALUE". The app's environment variable APP_VARIABLE will be set to "APP_VALUE".

Set environment variables exclusively, remove all other pre-existing environment variables

To reset all existing environment variables of a task definition, use the flag --exclusive-env:

$ ecs deploy my-cluster my-service -e webserver SOME_VARIABLE SOME_VALUE --exclusive-env

This will remove all other existing environment variables of all containers of the task definition, except for the variable SOME_VARIABLE with the value "SOME_VALUE" in the webserver container.

Set a secret environment variable from the AWS Parameter Store

Important

This option was introduced by AWS in ECS Agent v1.22.0. Make sure your ECS agent version is >= 1.22.0 or else your task will not deploy.

To add a new or adjust an existing secret of a specific container, run the following command:

$ ecs deploy my-cluster my-service -s webserver SOME_SECRET KEY_OF_SECRET_IN_PARAMETER_STORE

You can also specify the full arn of the parameter:

$ ecs deploy my-cluster my-service -s webserver SOME_SECRET arn:aws:ssm:<aws region>:<aws account id>:parameter/KEY_OF_SECRET_IN_PARAMETER_STORE

This will modify the webserver container definition and add or overwrite the environment variable SOME_SECRET with the value of the KEY_OF_SECRET_IN_PARAMETER_STORE in the AWS Parameter Store of the AWS Systems Manager.

Set secrets exclusively, remove all other pre-existing secret environment variables

To reset all existing secrets (secret environment variables) of a task definition, use the flag --exclusive-secrets:

$ ecs deploy my-cluster my-service -s webserver NEW_SECRET KEY_OF_SECRET_IN_PARAMETER_STORE --exclusive-secrets

This will remove all other existing secret environment variables of all containers of the task definition, except for the new secret variable NEW_SECRET with the value coming from the AWS Parameter Store with the name "KEY_OF_SECRET_IN_PARAMETER_STORE" in the webserver container.

Set environment via .env files

Instead of setting environment variables separately, you can pass a .env file per container to set the whole environment at once. You can either point to a local file or a file stored on S3, via:

$ ecs deploy my-cluster my-service --env-file my-app env/my-app.env

$ ecs deploy my-cluster my-service --s3-env-file my-app arn:aws:s3:::my-ecs-environment/my-app.env
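Both options expect a plain .env file, assumed here to be the usual one-variable-per-line KEY=VALUE format (file contents and names purely illustrative):

SOME_VARIABLE=SOME_VALUE
OTHER_VARIABLE=OTHER_VALUE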

Set secrets via .env files

Instead of setting secrets separately, you can pass a .env file per container to set all secrets at once.

This expects an env file format, but the values will be set as the valueFrom parameter in the secrets configuration. Each value can be either the path or the full ARN of a secret in the AWS Parameter Store. For example, with a secrets.env file like the following:

SOME_SECRET=arn:aws:ssm:<aws region>:<aws account id>:parameter/KEY_OF_SECRET_IN_PARAMETER_STORE

$ ecs deploy my-cluster my-service --secret-env-file webserver env/secrets.env

This will modify the webserver container definition and add or overwrite the environment variable SOME_SECRET with the value of the KEY_OF_SECRET_IN_PARAMETER_STORE in the AWS Parameter Store of the AWS Systems Manager.

Set a docker label

To add a new or adjust an existing docker label of a specific container, run the following command:

$ ecs deploy my-cluster my-service -d webserver somelabel somevalue

This will modify the webserver container definition and add or overwrite the docker label "somelabel" with the value "somevalue". This way you can add new or adjust already existing docker labels.

Adjust multiple docker labels

You can add or change multiple docker labels at once by passing the -d (or --docker-label) option several times:

$ ecs deploy my-cluster my-service -d webserver somelabel somevalue -d webserver otherlabel othervalue -d app applabel appvalue

This will modify the definition of two containers. The webserver's docker label "somelabel" will be set to "somevalue" and the label "otherlabel" to "othervalue". The app's docker label "applabel" will be set to "appvalue".

Set docker labels exclusively, remove all other pre-existing docker labels

To reset all existing docker labels of a task definition, use the flag --exclusive-docker-labels:

$ ecs deploy my-cluster my-service -d webserver somelabel somevalue --exclusive-docker-labels

This will remove all other existing docker labels of all containers of the task definition, except for the label "somelabel" with the value "somevalue" in the webserver container.

Modify a command

To change the command of a specific container, run the following command:

$ ecs deploy my-cluster my-service --command webserver "nginx"

This will modify the webserver container and change its command to "nginx". If the command requires arguments as well, simply specify them as you normally would:

$ ecs deploy my-cluster my-service --command webserver "nginx -c /etc/nginx/nginx.conf"

This works fine as long as none of the arguments contain spaces. If arguments to the command itself contain spaces, you can use the JSON format:

$ ecs deploy my-cluster my-service --command webserver '["sh", "-c", "while true; do echo Time flies like an arrow $(date); sleep 1; done;"]'

More details can be found in the AWS documentation: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#container_definitions

Set a task role

To set or change the role that the service's tasks should run as, use the following command:

$ ecs deploy my-cluster my-service -r arn:aws:iam::123456789012:role/MySpecialEcsTaskRole

This will set the task role to "MySpecialEcsTaskRole".

Set CPU and memory reservation

  • Set the cpu value for a task: --task-cpu 0.
  • Set the cpu value for a task container: --cpu <container_name> 0.
  • Set the memory value (hard limit) for a task: --task-memory 256.
  • Set the memory value (hard limit) for a task container: --memory <container_name> 256.
  • Set the memoryreservation value (soft limit) for a task container: --memoryreservation <container_name> 256 (see the combined example below).
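For instance, several of these flags can be combined in a single deployment (the numeric values are purely illustrative):

$ ecs deploy my-cluster my-service --task-cpu 256 --task-memory 512 --memory webserver 256 --memoryreservation webserver 128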

Set privileged or essential flags

  • Set the privileged value for a task definition: --privileged <container_name> True|False.
  • Set the essential value for a task definition: --essential <container_name> True|False.

Set logging configuration

Set the logConfiguration values for a task definition:

--log <container_name> awslogs awslogs-group <log_group_name>
--log <container_name> awslogs awslogs-region <region>
--log <container_name> awslogs awslogs-stream-prefix <stream_prefix>
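Putting these together, a deployment that sends the webserver container's logs to CloudWatch could look like this (log group and region are illustrative):

$ ecs deploy my-cluster my-service --log webserver awslogs awslogs-group my-log-group --log webserver awslogs awslogs-region eu-central-1 --log webserver awslogs awslogs-stream-prefix webserver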

Set port mapping

  • Set the port mappings values for a task definition: --port <container_name> <container_port> <host_port>.
    • Supports --exclusive-ports.
    • The protocol is fixed to tcp.
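For example, to publish container port 80 of the webserver container on host port 8080 (ports are illustrative):

$ ecs deploy my-cluster my-service --port webserver 80 8080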

Set volumes & mount points

  • Set the volumes values for a task definition: --volume <volume_name> /host/path.
    • <volume_name> can then be used with --mount.
  • Set the mount points values for a task definition: --mount <container_name> <volume_name> /container/path.
    • Supports --exclusive-mounts.
    • <volume_name> is the one set by --volume.
  • Set the ulimits values for a task definition: --ulimit <container_name> memlock 67108864 67108864.
    • Supports --exclusive-ulimits.
  • Set the systemControls values for a task definition: --system-control <container_name> net.core.somaxconn 511.
    • Supports --exclusive-system-controls.
  • Set the healthCheck values for a task definition: --health-check <container_name> <command> <interval> <timeout> <retries> <start_period>.
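As an illustration, the following defines a host volume named "data" and mounts it into the webserver container (volume name and paths are made up):

$ ecs deploy my-cluster my-service --volume data /host/data --mount webserver data /container/data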

Set Health Checks

  • Example --health-check webserver "curl -f http://localhost/alive/" 30 5 3 0

Placeholder Container

  • Add placeholder containers: --add-container <container_name>.
  • To comply with the minimum requirements for a task definition, a placeholder container is set like this:
    • The container name is <container_name>.
    • The container image is PLACEHOLDER.
    • The container soft limit is 128.
  • The idea is to set sensible values with the deployment.

It is possible to add and define a new container with the same deployment:

--add-container redis --image redis redis:6 --port redis 6379 6379

Remove containers

  • Containers can be removed: --remove-container <container_name>.
    • If removing them would leave no containers at all, the original containers are kept.

All but the container flags can be used with ecs deploy and ecs cron. The container flags are used with ecs deploy only.
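For example, assuming the non-container flags behave the same for ecs cron as stated above, a scheduled task could be switched to a new tag like this (tag value illustrative):

$ ecs cron my-cluster my-task my-rule -t 1.2.3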

Ignore capacity issues

If your cluster is undersized or the service's deployment options are not optimally set, the cluster might be incapable of running blue-green deployments. In this case, you might see errors like these:

ERROR: (service my-service) was unable to place a task because no container instance met all of its requirements. The closest matching (container-instance 123456-1234-1234-1234-1234567890) is already using a port required by your task. For more information, see the Troubleshooting section of the Amazon ECS Developer Guide.

There might also be warnings about insufficient memory or CPU.

To ignore these warnings, you can run the deployment with the flag --ignore-warnings:

$ ecs deploy my-cluster my-service --ignore-warnings

In that case, the warning is printed, but the script continues and waits for a successful deployment until it times out.

Deployment timeout

The deploy and scale actions allow defining a timeout (in seconds) via the --timeout parameter. This instructs ecs-deploy to wait for ECS to finish the deployment for the given number of seconds.

To run a deployment without waiting for the successful or failed result at all, set --timeout to the value of -1.
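For example (the 1200 seconds are illustrative):

$ ecs deploy my-cluster my-service --tag 1.2.3 --timeout 1200

$ ecs deploy my-cluster my-service --tag 1.2.3 --timeout -1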

Multi-Account Setup

If you manage different environments of your system in multiple different AWS accounts, you can easily assume a deployment role in the target account in which your ECS cluster is running. You only need to provide --account with the AWS account id and --assume-role with the name of the role you want to assume in the target account. ecs-deploy automatically assumes this role and deploys inside your target account:

Example:

$ ecs deploy my-cluster my-service --account 1234567890 --assume-role ecsDeployRole

Scaling

Scale a service

To change the number of running tasks and scale a service up and down, run this command:

$ ecs scale my-cluster my-service 4

Running a Task

Run a one-off task

To run a one-off task, based on an existing task-definition, run this command:

$ ecs run my-cluster my-task

You can define just the task family (e.g. my-task) or you can run a specific revision of the task-definition (e.g. my-task:123). And optionally you can add or adjust environment variables like this:

$ ecs run my-cluster my-task:123 -e my-container MY_VARIABLE "my value"

Run a task with a custom command

You can override the command definition via option -c or --command followed by the container name and the command in a natural syntax, e.g. no conversion to comma-separation required:

$ ecs run my-cluster my-task -c my-container "python some-script.py param1 param2"

The JSON syntax explained above regarding modifying a command is also applicable here.

Run a task in a Fargate Cluster

If you want to run a one-off task in a Fargate cluster, additional configuration is required to instruct AWS which subnets or security groups to use, for example. The required parameters for this are:

  • launchtype
  • securitygroup
  • subnet
  • public-ip

Example:

$ ecs run my-fargate-cluster my-task --launchtype=FARGATE --securitygroup sg-01234567890123456 --subnet subnet-01234567890123456 --public-ip

You can pass multiple subnet as well as multiple securitygroup values. The public-ip flag determines whether the task receives a public IP address. Please see ecs run --help for more details.
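Assuming multiple values are passed by repeating the respective option (as with -i and -e above; the IDs below are made up):

$ ecs run my-fargate-cluster my-task --launchtype=FARGATE --securitygroup sg-01234567890123456 --securitygroup sg-98765432109876543 --subnet subnet-01234567890123456 --subnet subnet-98765432109876543 --public-ip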

Monitoring

With ECS deploy you can track your deployments automatically. Currently only New Relic is supported:

New Relic

To record a deployment in New Relic, you can provide the API Key (attention: this is a specific REST API Key, not the license key) and the application id in two ways:

Via cli options:

$ ecs deploy my-cluster my-service --newrelic-apikey ABCDEFGHIJKLMN --newrelic-appid 1234567890

Or implicitly via the environment variables NEW_RELIC_API_KEY and NEW_RELIC_APP_ID:

$ export NEW_RELIC_API_KEY=ABCDEFGHIJKLMN
$ export NEW_RELIC_APP_ID=1234567890
$ ecs deploy my-cluster my-service

Optionally you can provide additional information for the deployment:

  • --comment "New feature X" - a comment on the deployment
  • --user john.doe - the name of the user who deployed
  • --newrelic-revision 1.0.0 - explicitly set the revision to use for the deployment

Note: If neither --tag nor --newrelic-revision are provided, the deployment will not be recorded.

Troubleshooting

If the service configuration in ECS is not optimally set, you might be seeing timeout or other errors during the deployment.

Timeout

The timeout error means that AWS ECS takes longer for the full deployment cycle than ecs-deploy is told to wait. The deployment itself might still finish successfully, if there are no other problems with the deployed containers.

You can increase the time (in seconds) to wait for finishing the deployment via the --timeout parameter. This time includes the full cycle of stopping all old containers and (re)starting all new containers. Different stacks require different timeout values; the default is 300 seconds.

The overall deployment time depends on several factors:

  • the type of application: node.js containers, for example, tend to take a long time to stop, whereas nginx containers stop almost immediately
  • whether old and new containers are able to run in parallel (e.g. using dynamic ports)
  • the deployment options and strategy (maximum percent > 100)
  • the desired count of running tasks, compared to
  • the number of ECS instances in the cluster

Alternative Implementations

There are some other libraries/tools available on GitHub which also handle the deployment of containers in AWS ECS. If you prefer another language over Python, have a look at these projects:

Shell

ecs-deploy - https://github.com/silinternational/ecs-deploy

Ruby

broadside - https://github.com/lumoslabs/broadside

ecs-deploy's People

Contributors

blytheaw, cgrice, emma-plutoflume, ero5004, fabfuel, fran-rg, ianrodrigues, jemisonf, jgmchan, knaveofdiamonds, krzysztof-plutoflume, lalvarezguillen, maks3w, markotibold, mjmayer, mohsinhijazee, nitrocode, normoes, omaraltayyan, pbthorste, petemill, snordhausen

ecs-deploy's Issues

Revert active task definition in case of deploy failure

Currently, if a deployment fails, you face a situation where AWS ECS constantly tries to deploy the broken version. To prevent this, we could automatically switch the service back to the last working task definition before the script exits, preventing these desperate ECS deployment attempts.

Deploy timeouts

I'm getting timeouts when trying to deploy to a cluster. I'm using

ecs deploy <cluster> <service> -i <service> repo/foo:bar

I'm running the following

NAME="Container Linux by CoreOS" ID=coreos VERSION=1298.7.0 VERSION_ID=1298.7.0 BUILD_ID=2017-03-31-0215 PRETTY_NAME="Container Linux by CoreOS 1298.7.0 (Ladybug)" ANSI_COLOR="38;5;75" HOME_URL="https://coreos.com/" BUG_REPORT_URL="https://github.com/coreos/bugs/issues"

I'm using the latest image from Docker hub for the ecs-agent, it basically deploys fine but doesn't remove the old task so the deploy never finishes. Everything in the ECS agent log looks fine to me, any pointers?

Old task definition is marked as inactive

Hello! Thanks for the great tool!
We currently use it with the option:

Deploy a new version of your service:
$ ecs deploy my-cluster my-service --tag 1.2.3 --timeout -1

It works fine, but the previous task definition is marked as INACTIVE with such option.
Is it possible to have an option to leave the previous task definition in ACTIVE status for rollback use cases?

Kill running tasks when deploying new task definition

Right now when you deploy a new task definition on an existing service, if you don't have enough capacity on the cluster to place the new task, the script will just exit with this message
"ERROR: (service foo) was unable to place a task because no container instance met all of its requirements. The closest matching (container-instance instance-id-bar) has insufficient CPU units available. For more information, see the Troubleshooting section of the Amazon ECS Developer Guide"
This happens even if you set the "Minimum healthy percent" to 0, so the container will eventually be deployed, but you don't know when.
In this case I think it would be a good idea to have a new flag to kill the existing tasks right after the service has been updated, that way the new one will be placed without any issues

NOTE: this happens very often on dev/qa environments, where you don't want to assign too much resources to the clusters.

--tag can get confused if the previous tag was bad and had ":" in it

I mistakenly pushed a --tag project:latest, instead of --tag latest. I then got stuck in a loop of the new image being project:latest:latest.

Suggest changing this line to a split instead of rsplit:

image_definition = container[u'image'].rsplit(u':', 1)

However, the fact that it is an rsplit() tells me it was done deliberately, maybe hoping for a port or something. Maybe using urllib.parse.urlparse is better? The ideal behaviour IMO would error out before attempting to deploy anyway.

Load environment variables from file

I use ecs deploy in combination with gitlab-ci.yml.

In some cases the actual deployment line in the file can be very long and confusing. This is due to an extended external configuration of the docker containers by using environment variables.

I did not find an option to set those variables up in a file and let ecs deploy just use this file instead of a long list of single environment variables.


The option could be called something like --env_file.

What do you think?

Ability to keep old task definitions

Currently this script deletes old task definition during deploy. So it's not possible to rollback to the old one. It would be great to have an option which allows me to keep old task definitions.

P.S.: we're intending to use this script widely for our infrastructure, so we can provide some help with PRs if needed :)

Dealing with run task throttling

Hi,

tl;dr - would you be willing to take a pull request for implementing a retry on run-task when throttled?

We've been using your library and are occasionally running into errors with regard to rate limit/throttling when running a task. Fargate likely exacerbates this issue for us -- I think it has more strict limits.

I was looking at the code and it looks pretty straight-forward to add support for retry on rate exceeded errors, up to a configurable limit. I was thinking default to something like 5 retries, 2 seconds wait for the first retry, and doubling that wait each consecutive retry (once the retry limit is reached, the rate error just gets re-raised). I have most of the concept ready to test, I'm just trying to get all the tox and virtualenv stuff working in Docker (I'm on a Windows machine).

Thanks

taskRoleArn Error

Hello,

Getting the error below when trying to deploy.

ecs deploy --profile my_profile --region us-east-1 my_cluster my_service --tag rel-0.1

Creating new task definition revision
Parameter validation failed:
Unknown parameter in input: "taskRoleArn", must be one of: family, containerDefinitions, volumes

deregister option

I installed ecs deploy via pip and from source, but it seems I still don't have the deregister option. Is there any timeline when this option will be available?

Record deployments in Datadog

Hi there.

This project looks quite promising and I'm keen to try using it for our ECS deployments. We use Datadog for our deployments though, and rather than slapping some bash scripts on top of this library I'd be keen to have some native support for Datadog events.

What would be involved in this? I could create a PR, just need to discuss the design for this I guess.

Cheers!

Update service without making new task definition

Is there any way I can update the service without creating new task definition (I couldn't find anything in the documentation)?
Since I am updating only the docker image and always using the latest tag, there is no need for a new task definition.
Via the AWS Console this can be done by checking the force new deployment option when updating the service.

Thanks,
Nikola.

ImportError: No module named tz

Traceback (most recent call last):
  File "/usr/local/bin/ecs", line 7, in <module>
    from ecs_deploy.cli import ecs
  File "/usr/local/lib/python2.7/site-packages/ecs_deploy/cli.py", line 11, in <module>
    from ecs_deploy.ecs import DeployAction, ScaleAction, RunAction, EcsClient, \
  File "/usr/local/lib/python2.7/site-packages/ecs_deploy/ecs.py", line 5, in <module>
    from dateutil.tz.tz import tzlocal
ImportError: No module named tz

https://github.com/fabfuel/ecs-deploy/blob/master/ecs_deploy/ecs.py#L5

- from dateutil.tz.tz import tzlocal
+ from dateutil.tz import tzlocal

New task definitions lose networkMode and placementConstraint attributes

The network mode and task placement constraint parameters of task definitions are relatively recent additions. It looks like this tool isn't set up to handle them presently. When I run an ecs deploy the new task definition loses the networkMode: host I originally had on the prior task definition.

A quick fix would probably be to just update register_task_definition and update_task_definition in ecs.py to add the new attributes. A longer-term one might be to rearrange things so that the new definition is a complete copy of the old (i.e. no matter what new attributes get added in the future), with targeted mutations of only what's necessary to effect the deployment.

Ability to not fail deployment if 'unable to place a task because no container instance met all of its requirements'

Okay, I want to use ecs-deploy in a rolling release setup. I run 3 container instances and this particular service is load balanced across 2 tasks.

When running ecs deploy, the command fails on my command line with 'Deployment failed' because of an error similar to this:

{
  '2016-12-28T14:26:44.793000+01:00': 'ERROR: (service more-services4-MyService-130NUIEVKWEKO) was unable to place a task because no container instance met all of its requirements. The closest matching (container-instance b33c7bbd-365a-4789-ad8f-2c4316e28965) is already using a port required by your task. For more information, see the Troubleshooting section of the Amazon ECS Developer Guide.'
}

however, 5 minutes later, the new TaskDefinition is active, steady, healthy and serving because AWS is doing a rolling release (deploy new task to 1 machine, switch the ELB to that, switch off one of the old tasks, deploy the second new task, ...)

Bottom line is: I'd like to ignore this error, as in my setup it's only an intermediate error.

So, I patched my local version like this:

diff --git a/ecs_deploy/cli.py b/ecs_deploy/cli.py
index 23dbc54..8d99017 100644
--- a/ecs_deploy/cli.py
+++ b/ecs_deploy/cli.py
@@ -157,11 +157,20 @@ def wait_for_finish(action, timeout, title, success_message, failure_message):
     waiting_timeout = datetime.now() + timedelta(seconds=timeout)
     while waiting and datetime.now() < waiting_timeout:
         sleep(1)
-        click.secho('.', nl=False)
         service = action.get_service()
-        waiting = not action.is_deployed(service) and not service.errors

-    if waiting or service.errors:
+        service_errors = dict(service.errors.items())
+
+        for error_key, error_message in dict(service_errors.items()).items():
+            if 'is already using a port required by your task' in error_message:
+                click.secho('o', nl=False)
+                del service_errors[error_key]
+
+        click.secho('.', nl=False)
+
+        waiting = not action.is_deployed(service) and not service_errors
+
+    if waiting or service_errors:
         print_errors(service, waiting, failure_message)
         exit(1)

Admittedly, this is not the most beautiful code because I wanted to ask for a heads up whether that would be something you'd want to integrate in ecs-deploy at all.

If so, should it be configurable ?

Deploy without timeout or wait

Hi,

I want to thank you guys for this fantastic utility program to simplify deployment on ECS. Are there plans to support deployments without a timeout option?

For certain deployments (especially launch type Fargate) it appears ECS Cluster manager takes a while to shutdown/deregister from ASG and reach steady state. The default 300s is not really enough, but it is hard to know when to arbitrarily increase it to

In certain projects, it may be better to not have a timeout. Is this functionality being considered?

Enable rollback feature

Hello, I was thinking it would be nice to add a rollback feature. This means that it will deregister the previous task definition after the deploy is marked as healthy; otherwise it will roll back to the previous one after the TIMEOUT.

Add --profile for AWS

I use profiles to login to aws:
$(aws ecr get-login --profile dev)

Would it be possible to add an option --profile to send it to aws with ecr-deploy?

For example:
aws ecs describe-task-definition --task-definition web-service --profile dev
is working.

aws ecs describe-task-definition --task-definition web-service
not (An error occurred (ClientException) when calling the DescribeTaskDefinition operation: Unable to describe task definition.).

How can I wait for running task to stop?

Hi there.
I want to wait for a running task to stop after running a one-shot task, using aws ecs wait tasks-stopped.
To do so, I have to pass the task ARN to awscli, but ecs-deploy doesn't output the task ARN.
I think it would be useful if ecs-deploy output the ARN of the task it runs.

`ecs run ...` does not work with Fargate

I get the following when I try to run ecs run ... with a Fargate cluster.

An error occurred (InvalidParameterException) when calling the RunTask operation: Network Configuration must be provided when networkMode 'awsvpc' is specified.

__init__() takes at least 8 arguments (7 given)

Everytime I do a deploy I get the following response:
__init__() takes at least 8 arguments (7 given)

command ran is: ecs deploy test-cluster test-service . Though I've tried multiple combinations of different options flags, but it always returns the error above.

Any insight would be great, Thanks.

Secrets are removed with ecs deploy

Hello! Thanks again for the great tool
I use version 1.7.0 and I have the next problem.

ecs deploy $cluster_name $APP_NAME -t $version --no-deregister --timeout -1

During the execution of the command, no secrets information is copied from the previous container definition to the new one
Wish you a good day!

Python API

Have you given any consideration to creating a python package that (roughly) replicates the functionality of the CLI?

I've got a (stripped-down) fork that I've been using with AWS Lambda, and was wondering if there's any interest in incorporating those changes upstream.

timeout

Creating new task definition revision
Successfully created revision: 22

Updating service
Successfully changed task definition to: TD-WEBAPP-DEV:22

Deploying new task definition..................................................................................................................................................................................................................................
Deployment failed due to timeout. Please see: https://github.com/fabfuel/ecs-deploy#timeout

Empty variables used in --command

I use ecs deploy a lot with the gitlab ci and every now and then I come across the need of simplifying the stages, i.e. getting similar bits into a base stage.

The problem with this is, I use environment variables to populate, among others, the --command string.

Problem:
In case an empty variable is at the very beginning of this --command string, the resulting CMD section becomes something like this

"", --any-option 1, ...

This is something that is not supported in AWS ECS.

It works without any issues in case the empty variable is somewhere in between the options of the --command string.

Solution:
Remove empty variables from the command list.

I already created a pull request (cause it worked so well last time).

Updating ECS scheduled tasks

(First off - thanks for the helpful tool that saved me a load of time).

We use scheduled tasks on ECS and think it would be useful if we could update scheduled tasks with this tool. When we push a new docker image we'd like to also use your tool to update these.

I believe using the current aws cli you would do:

aws ecs register-task-definition # task def with new image
aws events put-targets # point existing rule at new task def

It kind of breaks your current CLI, so we'd have to think about that.

Is this something you would consider a PR for?

Single task for staging, update with new image

Hi there,

Thanks for the tool, I quickly used the incredibly handy ecs deploy my-cluster my-service --tag 1.2.3 to update 2 containers running behind an ELB. Besides the fact it takes around 6 minutes (seems this is to do with AWS though?!) all worked fine.

My question is: can I use this CLI to update a single container (no load balancing) with downtime?
For staging I simply want to bring down the old task and bring up a new one. If I try this using ecs deploy my-cluster my-service --tag 1.2.3, ECS complains due to the lack of dynamic port mapping, i.e. both tasks use the same port rather than the old task being brought down before the new one is brought up. I have tried this even when setting healthy to 0, but the same issue seems to occur.

Any thoughts, ideas appreciated. Thanks,

Unknown parameter in input: "executionRoleArn"

Hi, i tried to deploy but got error:

Creating new task definition revision
Traceback (most recent call last):
  File "/usr/local/bin/ecs", line 11, in <module>
    sys.exit(ecs())
  File "/usr/local/lib/python2.7/site-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python2.7/site-packages/click/core.py", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python2.7/site-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python2.7/site-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/ecs_deploy/cli.py", line 81, in deploy
    new_td = create_task_definition(deployment, td)
  File "/usr/local/lib/python2.7/site-packages/ecs_deploy/cli.py", line 291, in create_task_definition
    new_td = action.update_task_definition(task_definition)
  File "/usr/local/lib/python2.7/site-packages/ecs_deploy/ecs.py", line 545, in update_task_definition
    additional_properties=task_definition.additional_properties
  File "/usr/local/lib/python2.7/site-packages/ecs_deploy/ecs.py", line 65, in register_task_definition
    **additional_properties
  File "/usr/local/lib/python2.7/site-packages/botocore/client.py", line 312, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/usr/local/lib/python2.7/site-packages/botocore/client.py", line 575, in _make_api_call
    api_params, operation_model, context=request_context)
  File "/usr/local/lib/python2.7/site-packages/botocore/client.py", line 630, in _convert_to_request_dict
    api_params, operation_model)
  File "/usr/local/lib/python2.7/site-packages/botocore/validate.py", line 291, in serialize_to_request
    raise ParamValidationError(report=report.generate_report())
botocore.exceptions.ParamValidationError: Parameter validation failed:
Unknown parameter in input: "executionRoleArn", must be one of: family, taskRoleArn, networkMode, containerDefinitions, volumes, placementConstraints

any idea ?

Ability to read Variables from .env files

I am using your amazing tool with jenkins to make my CD, but I'd like to be able to provide variables from
an environment file where I have it tracked which git, can you in a future release a way to support this ?

thanks again for your effort and bring us this amazing tool :)

Set executionRoleArn for task

Please make it possible to add the executionRoleArn to a task.

It is already possible for taskRoleArn using --role or -r.

Here you can find how to do it in boto3.


I got the error when using secrets for the first time:

botocore.errorfactory.ClientException: An error occurred (ClientException) when calling the RegisterTaskDefinition operation: When you are specifying container secrets, you must also specify a value for 'executionRoleArn'.

ecs deploy doesn't stop old revision running tasks

Hi, I'm using ecs deploy with gitlab-ci. When trying to use it on a service with 2 running tasks, it won't stop the old tasks, and the deployment times out.

The command I'm using:
ecs deploy --region ${ECS_REGION} ${CLUSTER_NAME} ${SERVICE_NAME} --deregister --task ${TASK_FAMILY} --user gitlab-ci

The new revision is deployed and running 2 tasks, but the old tasks don't stop.
After the timeout below ends, I end up with 1 task on revision 24 and 2 tasks on revision 25.

$ ecs deploy --region ${ECS_REGION} ${CLUSTER_NAME} ${SERVICE_NAME} --deregister --task ${TASK_FAMILY}  --user gitlab-ci
Deploying based on task definition: backend-engine

Creating new task definition revision
Successfully created revision: 25

Updating service
Successfully changed task definition to: backend-engine:25

Deploying new task definition........................................................................................................................................................................................................................................................
Deployment failed due to timeout. Please see: https://github.com/fabfuel/ecs-deploy#timeout

ERROR: Job failed: exit code 1

"deployment failed" but tasks are updated

Hi,

First, thanks for this tool. It's been great thus far.

I'm seeing a "Deployment failed" message, but my tasks are updated appropriately. Is there a way to get more information as to what is actually failing?

Creating new task definition revision
Successfully created revision: 18

Updating service
Successfully changed task definition to: scout-prod-td:18

Deploying new task definition...........................

Deployment failed

For what it's worth, I have two identical clusters (beta + staging) that run successfully with the same configurations.

Thanks!

ImportError: No module named boto3.session

$ pip install boto3 ecs-deploy
Collecting boto3
Downloading boto3-1.4.7-py2.py3-none-any.whl (128kB)
Collecting ecs-deploy
Downloading ecs-deploy-1.4.0.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/tmp/pip-build-vWl8tn/ecs-deploy/setup.py", line 6, in <module>
    from ecs_deploy.ecs import VERSION
  File "ecs_deploy/ecs.py", line 3, in <module>
    from boto3.session import Session
ImportError: No module named boto3.session

ECS service Daemon mode

Hello, I set the service to "DAEMON" mode, meaning the number of desired tasks is automatic (shown in the top right corner of the ECS console).

However I'm getting an error when deploying a new tag

Command

ecs deploy --region ${AWS_REGION} ${CLUSTER_NAME} ${ECS_SERVICE_NAME} --tag ${CI_COMMIT_SHA} --timeout 1200

Error:

Creating new task definition revision
Successfully created revision: 6

Updating service
An error occurred (InvalidParameterException) when calling the UpdateService operation: The daemon scheduling strategy does not support a desired count for services. Remove the desired count value and try again

i must be missing something here
can you please advise on this ?

ecs deploy timing out on "Deploying task definition....."

Hi!
I'm having an issue when using the ecs deploy on CircleCI.

After the whole process (create new task def, update service), it freezes on the "Deploying task definition".

ecs deploy $AWS_CLUSTER $AWS_SERVICE -t $AWS_TAG_DEV
Updating task definition
Changed image of container 'xxxxx-frontend' to: "1234.dkr.ecr.us-west-2.amazonaws.com/xxxxx/frontend:dev" (was: "1234.dkr.ecr.us-west-2.amazonaws.com/xxxxx/frontend:dev")

Creating new task definition revision
Successfully created revision: 10
Successfully deregistered revision: 9

Updating service
Successfully changed task definition to: xxxxx-frontend:10

Deploying task definition...................................................

Checking the AWS Console, I can see that the new task is running, with the new task definition - but the old one is not stopped and the desired status is still 'running'.
If I manually stop the old task, the "Deploying task definition" unfreezes and the process finishes with success. The desired status is 1 task for this service, so checking your code I can see that the stop task is never called for the old one.

Am I missing something? Is this an error of configuration on ECS?

Thanks for the project!

Deployment time with blue-green methodology

Hi,
I have a big problem with deployments using ecs-deploy. It works very well, but I ran into a situation when deploying my image with a blue-green methodology: 3 active instances in one cluster, managed by one service with a desired count of 2 of the same task. When I start ecs-deploy, it deploys my updated task onto the empty instance and then shows a warning about not having enough space and resources for the next deployment: "service was unable to place a task because no container instance met all of its requirements." AWS throws this info two times, with roughly 8-minute intervals in between. Meanwhile it keeps trying to deploy, and according to the blue-green methodology it should, after the first deployment, drain the next instance and then do the next deployment there. After two tries and warnings it finally does what I want, but my pipeline takes much longer, which in my opinion is just a waste of time. Is there any option to force ECS to deploy without this info and warning (--ignore-warnings doesn't work in this particular situation)?

error ecs deploy

ecs deploy CL-WebApp-Dev-azA-B SV-WEBAPP-DEV
Traceback (most recent call last):
  File "/usr/bin/ecs", line 11, in <module>
    load_entry_point('ecs-deploy==1.4.3', 'console_scripts', 'ecs')()
  File "/usr/lib/python2.7/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/usr/lib/python2.7/site-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/lib/python2.7/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/lib/python2.7/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/ecs_deploy/cli.py", line 62, in deploy
    client = get_client(access_key_id, secret_access_key, region, profile)
  File "/usr/lib/python2.7/site-packages/ecs_deploy/cli.py", line 23, in get_client
    return EcsClient(access_key_id, secret_access_key, region, profile)
  File "/usr/lib/python2.7/site-packages/ecs_deploy/ecs.py", line 15, in __init__
    self.boto = session.client(u'ecs')
  File "/usr/lib/python2.7/site-packages/boto3/session.py", line 263, in client
    aws_session_token=aws_session_token, config=config)
  File "/usr/lib/python2.7/site-packages/botocore/session.py", line 889, in create_client
    client_config=config, api_version=api_version)
  File "/usr/lib/python2.7/site-packages/botocore/client.py", line 76, in create_client
    verify, credentials, scoped_config, client_config, endpoint_bridge)
  File "/usr/lib/python2.7/site-packages/botocore/client.py", line 291, in _get_client_args
    verify, credentials, scoped_config, client_config, endpoint_bridge)
  File "/usr/lib/python2.7/site-packages/botocore/args.py", line 45, in get_client_args
    endpoint_url, is_secure, scoped_config)
  File "/usr/lib/python2.7/site-packages/botocore/args.py", line 112, in compute_client_args
    service_name, region_name, endpoint_url, is_secure)
  File "/usr/lib/python2.7/site-packages/botocore/client.py", line 364, in resolve
    service_name, region_name)
  File "/usr/lib/python2.7/site-packages/botocore/regions.py", line 122, in construct_endpoint
    partition, service_name, region_name)
  File "/usr/lib/python2.7/site-packages/botocore/regions.py", line 135, in _endpoint_for_partition
    raise NoRegionError()
botocore.exceptions.NoRegionError: You must specify a region.

Remove omitted ENV variables on new deployments

We are using ecs deploy to specify environment (and secrets) via command line for new TaskDefinitions. If an older TaskDefinition has some ENV variables that we need to remove from future deployments (by removing them from the command line), they still show up in the new TaskDefinition - it seems the existing ones don't get deleted when they are removed from the command line.
