
Docker Worker

Docker task host for Linux.

Each task is evaluated in an isolated docker container. Docker provides a number of useful primitives that make this work well: since images are copy-on-write (COW), running a large number of tasks per host is feasible, and we can manage their overall resource usage.
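
For orientation, a task tells the worker which Docker image to run and what command to execute inside it. A minimal sketch of a task payload looks roughly like this (see the doc site for the authoritative payload schema; the values below are placeholders):

  {
    "payload": {
      "image": "ubuntu:14.04",
      "command": ["/bin/bash", "-c", "echo hello"],
      "maxRunTime": 600
    }
  }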

We manipulate the docker hosts through the Docker Remote API.

See the doc site for how to use the worker from an existing worker type; the docs here are for hacking on the worker itself.

Requirements

  • Node 0.12.x or greater
  • Docker
  • Packer (to build the AMI)

Usage

# from the root of this repo (also see --help)
node --harmony bin/worker.js <config>
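
For example, assuming a config profile named test is defined in the defaults (the profile name here is an assumption; use whichever profile your config actually defines):

# run the worker with a hypothetical "test" profile
node --harmony bin/worker.js test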

Configuration

The defaults file contains all configuration options for the docker worker; in particular, these are important:

  • taskcluster: the credentials needed to authenticate all pull jobs from taskcluster.

  • pulse: the credentials for listening to pulse exchanges.

  • registries: registry credentials.

  • statsd: credentials for the statsd server.
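
For local hacking you can override these values with your own configuration. A minimal sketch of what such an overlay might look like follows (the nesting and field names here are assumptions; check the defaults file for the authoritative shape):

  {
    "taskcluster": {
      "clientId": "...",
      "accessToken": "..."
    },
    "pulse": {
      "username": "...",
      "password": "..."
    },
    "registries": {
      "registry.example.com": {
        "username": "...",
        "password": "..."
      }
    },
    "statsd": {
      "prefix": "docker-worker."
    }
  }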

Directory Structure

Running tests

The ./test/test.sh script is used to run individual tests, which are suffixed with _test.js; for example: ./test/test.sh test/integration/live_log_test.js

# from the top level

./build.sh
npm test

This will build the docker image for the tasks and run the entire suite.

Common problems

  • Time synchronization: if you're running docker in a VM, the VM's clock may drift. This often results in stale warnings on the queue; one way to re-sync the clock is sketched below.
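
A simple way to re-sync a drifted clock, assuming ntpdate is available inside the VM (an illustration only, not something the worker does for you):

# run inside the VM that hosts docker
sudo ntpdate -u pool.ntp.org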

Deployment

The below is a detailed guide to how deployment works. If you know what you're doing and just need a checklist, see the deployment checklist.

Requirements

  • packer
  • make
  • node 0.12 or greater
  • credentials for required services

Building AMIs

The docker worker deploy script is essentially a wrapper around packer, with an interactive configuration script to ensure you're not missing particular environment variables. There are two primary workflows that are important; an end-to-end sketch of the combined flow follows the list below.

  1. Building the base AMI. Do this when:

    • You need to add new apt packages.

    • You need to update docker (see above).

    • You need to run some expensive one-off installation.

    Note that you need to manually update the sourceAMI field in the app.json file after you create a new base AMI.

    Example:

    ./deploy/bin/build base
  2. Building the app AMI. Do this when:

    • You want to deploy new code/features.

    • You need to update diamond/statsd/configs (not packages).

    • You need to update any baked-in credentials (these can usually be overridden in the provisioner, but sometimes baking them in is desirable).

    Note that just because you deploy an AMI does not mean anyone is using it. Usually you also need to update a provisioner workerType with the new AMI id.

    Example:

    ./deploy/bin/build app
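
Putting the two workflows together, a typical end-to-end flow looks roughly like this (a sketch; the new AMI ids come from the packer output):

# build a new base AMI (only when packages, docker, or one-off installs change)
./deploy/bin/build base

# manually set sourceAMI in app.json to the new base AMI id

# build the app AMI on top of that base
./deploy/bin/build app

# finally, point the relevant provisioner workerType at the new app AMI id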

Everything related to the deployment of the worker is in the deploy folder, which has a number of important sub-folders.

  • deploy/packer : Contains the definitions of the AMIs (app/base) which need to be created. Typically you only need to build the "app" AMI, which is built on a pre-existing base AMI (see sourceAMI in app.json).

  • deploy/variables.js : Contains the list of variables for the deployment and their possible defaults.

  • deploy/template : A mirror of what will be deployed on the server, but with mustache-like variables (see variables.js for the list of all possible variables). If you need to add a script, config, etc., add it here in one of the sub-folders.

  • deploy/deploy.json : A generated file (created by running deploy/bin/build or make -C deploy) that contains all the variables needed to deploy the application.

  • deploy/target : Contains the final files to be uploaded when creating the AMI, with all template values substituted. It is useful to check this by running make -C deploy prior to building the full AMI (see the sketch after this list).

  • deploy/bin/build : The script responsible for invoking packer with the correct arguments and creating the artifacts which need to be uploaded to the AMI.
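
To sanity-check the template substitution before spending time on a full packer build, something like the following works (the exact contents of deploy/target depend on your templates):

# regenerate deploy/deploy.json and deploy/target from deploy/template
make -C deploy

# inspect the substituted files before building the AMI
ls -R deploy/target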

Block-Device Mapping

The AMI built with packer will mount all available instance storage under /mnt and use this for storing docker images and containers. In order for this to work you must specify a block device mapping that maps ephemeral[0-9] to /dev/sd[b-z].

It should be noted that they'll appear in the virtual machine as /dev/xvd[b-z], as this is how Xen storage devices are named under newer kernels. However, the format and mount script will mount them all as a single partition on /mnt using LVM.

An example block device mapping looks as follows:

  {
    "BlockDeviceMappings": [
      {
        "DeviceName": "/dev/sdb",
        "VirtualName": "ephemeral0"
      },
      {
        "DeviceName": "/dev/sdc",
        "VirtualName": "ephemeral1"
      }
    ]
  }
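
For reference, an equivalent mapping can be passed to the AWS CLI when launching an instance by hand; this is only an illustration of the mapping itself, and the AMI id and instance type below are placeholders:

aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type m3.xlarge \
  --block-device-mappings '[{"DeviceName":"/dev/sdb","VirtualName":"ephemeral0"},{"DeviceName":"/dev/sdc","VirtualName":"ephemeral1"}]'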

Updating Schema

Schema changes are not deployed automatically, so if the schema has changed, run the upload-schema.js script to update it.

Before running the upload-schema script, ensure that AWS credentials are loaded into your environment. See Configuring AWS with Node.
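
For example, the standard AWS SDK environment variables are sufficient (values are placeholders):

export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...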

Run the upload-schema.js script to update the schema:

node --harmony bin/upload-schema.js
