
deploykit's Introduction

InfraKit


InfraKit is a toolkit for infrastructure orchestration. With an emphasis on immutable infrastructure, it breaks down infrastructure automation and management processes into small, pluggable components. These components work together to actively ensure the infrastructure state matches the user's specifications. InfraKit therefore provides infrastructure support for higher-level container orchestration systems and can make your infrastructure self-managing and self-healing.

To get started, try the tutorial, or check out the video below:

InfraKit + LinuxKit POC


In this video, InfraKit was used to build a custom Linux operating system (based on LinuxKit). We then deployed a cluster of virtual machine instances on a local Mac laptop using the xhyve-based HyperKit hypervisor. A cluster of 3 servers booted up in seconds. Later, after the custom OS image was updated with a new public key, InfraKit detected the change and orchestrated a rolling update of the nodes. We then deployed the same OS image to a bare-metal ARM server running on Packet.net, where the server booted via custom iPXE directly from localhost. The demo illustrates some of the key concepts and components in InfraKit and shows how InfraKit can be used to implement an integrated workflow from custom OS image creation to cluster deployment and Day N management. The entire demo is published as a playbook, and you can create your own playbooks too.

Use Cases

InfraKit is designed to automate setup and management of infrastructure in support of distributed systems and higher-level container orchestration systems. Some of the use cases we are working on include:

  • Bootstrap / installation of container orchestration systems like Docker Swarm and Kubernetes
  • Cluster autoscaler that can work across a variety of platforms from public clouds (like AWS autoscaling groups) to bare-metal hosts.
  • GPU cluster provisioning
  • Integration with LinuxKit for building and deploying immutable infrastructure from declarative specifications of the entire stack: from infrastructure resources to OS / kernel and applications.
  • Day-N management and automation of infrastructure - from provisioning to rolling updates and capacity scaling.

InfraKit has a modular architecture with a set of interfaces which define the interactions of these 'plugin objects'. Plugins are active daemons that cooperate with one another to ensure the infrastructure state matches your specifications.

Plugins

InfraKit makes extensive use of Plugins to manage arbitrary systems in diverse environments, which can be composed to meet different needs. See the plugins documentation for more technical details.

Here is a list of plugins:

Core Implementations

plugin type description
group group core group controller for rolling updates, scaling, etc.
swarm flavor runs Docker in Swarm mode
kubernetes flavor bootstraps a single-master Kubernetes cluster
combo flavor combines multiple flavor plugins
vanilla flavor manual specification of instance fields
aws instance creates Amazon EC2 instances and other resource types
digitalocean instance creates DigitalOcean droplets
docker instance provisions containers via Docker
google instance creates Google Cloud Platform compute instances
file instance useful for development and testing
hyperkit instance creates HyperKit VMs on macOS
libvirt instance provisions KVM VMs via libvirt
maas instance bare-metal provisioning using Ubuntu MAAS
packet instance provisions bare-metal hosts on Packet
rackhd instance bare-metal server provisioning via RackHD
terraform instance creates resources using Terraform
vagrant instance creates Vagrant VMs
vsphere instance creates VMware VMs

Community Implementations

plugin type description
HewlettPackard/infrakit-instance-oneview instance bare-metal server provisioning via HP-OneView
IBM Cloud instance Provisions instances on IBM Cloud via Terraform
AliyunContainerService/infrakit.aliyun instance Provisions instances on Alibaba Cloud
1and1/infrakit-instance-oneandone instance Provisions instances on 1&1 Cloud Server
sacloud/infrakit-instance-sakuracloud instance Provisions instances on Sakura Cloud

Have a Plugin you'd like to share? Submit a Pull Request to add yourself to the list!

Building

Your Environment

Make sure you check out the project following a convention for building Go projects. For example,

# Install Go - https://golang.org/dl/
# Assuming your go compiler is in /usr/local/go
export PATH=/usr/local/go/bin:$PATH

# Your dev environment
mkdir -p ~/go
export GOPATH=~/go
export PATH=$GOPATH/bin:$PATH

mkdir -p $GOPATH/src/github.com/docker
cd $GOPATH/src/github.com/docker
git clone git@github.com:docker/infrakit.git
cd infrakit

We recommend Go version 1.9 or greater for all platforms.

Also install a few build tools:

make get-tools

Running tests

$ make ci

Binaries

$ make binaries

Executables will be placed in the ./build directory. There is only one executable, infrakit, which can be used as both a CLI and a server, depending on the CLI verbs and flags.

Design

Configuration

InfraKit uses JSON for configuration because it is composable and a widely accepted format for many infrastructure SDKs and tools. Since the system is highly component-driven, our JSON format follows simple patterns to support the composition of components.

A common pattern for a JSON object looks like this:

{
   "SomeKey": "ValueForTheKey",
   "Properties": {
   }
}

There is only one Properties field in this JSON, and its value is a JSON object. The opaque JSON value for Properties is decoded via a Go Spec struct defined within the plugin's package -- for example, vanilla.Spec.

The JSON above is a value; the type of that value is indicated by the field that encloses it. For example, the default Group Spec is composed of an Instance plugin, a Flavor plugin, and an Allocation:

{
  "ID": "name-of-the-group",
  "Properties": {
    "Allocation": {
    },
    "Instance": {
      "Plugin": "name-of-the-instance-plugin",
      "Properties": {
      }
    },
    "Flavor": {
      "Plugin": "name-of-the-flavor-plugin",
      "Properties": {
      }
    }
  }
}

The group's Spec has Instance and Flavor fields, which are used to indicate the plugin type, and the values of these fields follow the Plugin / Properties pattern shown above.

The Allocation determines how the Group is managed. Allocation has two properties:

  • Size: an integer for the number of instances to maintain in the Group
  • LogicalIDs: a list of string identifiers, one will be associated with each Instance

Exactly one of these fields must be set, which defines whether the Group is treated as 'cattle' (Size) or 'pets' (LogicalIDs). It is up to the Instance and Flavor plugins to determine how to use LogicalID values.

As an example, if you wanted to manage a Group of NGINX servers, you could write a custom Group plugin for ultimate customization. The most concise configuration looks something like this:

{
  "ID": "nginx",
  "Plugin": "my-nginx-group-plugin",
  "Properties": {
    "port": 8080
  }
}

However, you would likely prefer to use the default Group plugin and implement a Flavor plugin to focus on application-specific behavior. This gives you immediate support for any infrastructure that has an Instance plugin. Your resulting configuration might look something like this:

{
  "ID": "nginx",
  "Plugin": "group",
  "Properties": {
    "Allocation": {
      "Size": 10
    },
    "Instance": {
      "Plugin": "aws",
      "Properties": {
        "region": "us-west-2",
        "ami": "ami-123456"
      }
    },
    "Flavor": {
      "Plugin": "nginx",
      "Properties": {
        "port": 8080
      }
    }
  }
}

Once the configuration is ready, you will tell a Group plugin to

  • watch it
  • update it
  • destroy it

Watching the group as specified in the configuration means that the Group plugin will create the instances if they don't already exist. If existing instances disappear for any reason, new instances will be created so that the state matches your specifications.

Updating the group tells the Group plugin that your configuration may have changed. It will then determine the changes necessary to ensure the state of the infrastructure matches the new specification.

Docs

Additional documentation can be found here.

Reporting security issues

The maintainers take security seriously. If you discover a security issue, please bring it to their attention right away!

Please DO NOT file a public issue; instead, send your report privately to [email protected].

Security reports are greatly appreciated, and we will publicly thank you for them. We also like to send gifts; if you're into Docker schwag, make sure to let us know. We currently do not offer a paid security bounty program, but we are not ruling one out in the future.

Design goals

InfraKit is currently focused on supporting setup and management of base infrastructure, such as a cluster orchestrator. The image below illustrates an architecture we are working towards supporting - a Docker cluster in Swarm mode.

arch image

This configuration co-locates InfraKit with Swarm manager nodes and offers high availability of InfraKit itself and Swarm managers (using attached storage). InfraKit is shown managing two groups - managers and workers that will be continuously monitored, and may be modified with rolling updates.

Countless configurations are possible with InfraKit, but we believe achieving support for this configuration will enable a large number of real-world use cases.

Copyright and license

Copyright © 2016 Docker, Inc. All rights reserved. Released under the Apache 2.0 license. See LICENSE for the full license text.

deploykit's People

Contributors

adrianimboden, akihirosuda, anonymuse, avsm, craigyam, davefreitag, dgageot, ericvn, frenchben, friism, hekaldama, jacobfrericks, jalateras, joeabbey, johnmccabe, justincormack, kaufers, kencochrane, linsun, mgrachev, ndegory, nwt, ohmk, rafaelhdr, rneugeba, thebsdbox, vanuan, wfarner, yamamoto-febc, yujioshima


deploykit's Issues

Add task for swarm join in provision workflow

Currently in the Editions / 1.12 TP2 the behavior of a node when provisioned is that it automatically joins the cluster via

docker swarm join <master_ip>:4500

which is executed via the user data for the instance when the Moby instance comes up (as part of an autoscaling group).

For other environments where autoscaling is absent (pretty much everyone outside of AWS, Azure, and GCE), machete needs to run the cluster join as a task in the provisioning workflow.

Assumptions:

  • Same cluster (no provision and join arbitrary swarms)
  • Configuration / manager IP address would have to be passed in as a configuration parameter in the machine template. Editions or some other client can generate / update the machine template once the Swarm manager IP is known.

Fails to build with Go 1.6 on macOS Sierra

go version go1.6.2 darwin/amd64
make -k infrakit
+ clean
mkdir -p ./infrakit
rm -rf ./infrakit/*
+ vendor-sync
+ build
+ infrakit
# github.com/docker/infrakit/vendor/golang.org/x/text/unicode/norm
fatal error: unexpected signal during runtime execution
[signal 0xb code=0x1 addr=0x1f41ead02f9e pc=0xf0eb]

runtime stack:
runtime.throw(0x4971e0, 0x2a)
    /usr/local/Cellar/go/1.6.2/libexec/src/runtime/panic.go:547 +0x90
runtime.sigpanic()
    /usr/local/Cellar/go/1.6.2/libexec/src/runtime/sigpanic_unix.go:12 +0x5a
runtime.unlock(0x984540)
    /usr/local/Cellar/go/1.6.2/libexec/src/runtime/lock_sema.go:107 +0x14b
runtime.(*mheap).alloc_m(0x984540, 0x1, 0xf, 0xc820000180)
    /usr/local/Cellar/go/1.6.2/libexec/src/runtime/mheap.go:492 +0x314
runtime.(*mheap).alloc.func1()
    /usr/local/Cellar/go/1.6.2/libexec/src/runtime/mheap.go:502 +0x41
runtime.systemstack(0xc820453e58)
    /usr/local/Cellar/go/1.6.2/libexec/src/runtime/asm_amd64.s:307 +0xab
runtime.(*mheap).alloc(0x984540, 0x1, 0x1000000000f, 0xed8f)
    /usr/local/Cellar/go/1.6.2/libexec/src/runtime/mheap.go:503 +0x63
runtime.(*mcentral).grow(0x985ea0, 0x0)
    /usr/local/Cellar/go/1.6.2/libexec/src/runtime/mcentral.go:209 +0x93
runtime.(*mcentral).cacheSpan(0x985ea0, 0xc820442000)
    /usr/local/Cellar/go/1.6.2/libexec/src/runtime/mcentral.go:89 +0x47d
runtime.(*mcache).refill(0xaf6e10, 0xc80000000f, 0xdf3070)
    /usr/local/Cellar/go/1.6.2/libexec/src/runtime/mcache.go:119 +0xcc
runtime.mallocgc.func2()
    /usr/local/Cellar/go/1.6.2/libexec/src/runtime/malloc.go:642 +0x2b
runtime.systemstack(0xc820020000)
    /usr/local/Cellar/go/1.6.2/libexec/src/runtime/asm_amd64.s:291 +0x79
runtime.mstart()
    /usr/local/Cellar/go/1.6.2/libexec/src/runtime/proc.go:1051

goroutine 1 [running]:
runtime.systemstack_switch()
    /usr/local/Cellar/go/1.6.2/libexec/src/runtime/asm_amd64.s:245 fp=0xc822bb50f8 sp=0xc822bb50f0
runtime.mallocgc(0xe0, 0x425e40, 0x1, 0xc8230d9ea0)
    /usr/local/Cellar/go/1.6.2/libexec/src/runtime/malloc.go:643 +0x869 fp=0xc822bb51d0 sp=0xc822bb50f8
runtime.newobject(0x425e40, 0xc8230d9ea0)
    /usr/local/Cellar/go/1.6.2/libexec/src/runtime/malloc.go:781 +0x42 fp=0xc822bb51f8 sp=0xc822bb51d0
cmd/compile/internal/gc.regopt.func1(0x0, 0x0)
    /usr/local/Cellar/go/1.6.2/libexec/src/cmd/compile/internal/gc/reg.go:1064 +0x2f fp=0xc822bb5210 sp=0xc822bb51f8
cmd/compile/internal/gc.Flowstart(0xc8230e59e0, 0x4c4778, 0x20)
    /usr/local/Cellar/go/1.6.2/libexec/src/cmd/compile/internal/gc/popt.go:288 +0x90c fp=0xc822bb5320 sp=0xc822bb5210
cmd/compile/internal/gc.regopt(0xc8230e59e0)
    /usr/local/Cellar/go/1.6.2/libexec/src/cmd/compile/internal/gc/reg.go:1064 +0x3d0 fp=0xc822bb57d8 sp=0xc822bb5320
cmd/compile/internal/gc.compile(0xc8202eb710)
    /usr/local/Cellar/go/1.6.2/libexec/src/cmd/compile/internal/gc/pgen.go:521 +0xe6d fp=0xc822bb5a48 sp=0xc822bb57d8
cmd/compile/internal/gc.funccompile(0xc8202eb710)
    /usr/local/Cellar/go/1.6.2/libexec/src/cmd/compile/internal/gc/dcl.go:1450 +0x1c0 fp=0xc822bb5ac0 sp=0xc822bb5a48
cmd/compile/internal/gc.Main()
    /usr/local/Cellar/go/1.6.2/libexec/src/cmd/compile/internal/gc/lex.go:472 +0x2116 fp=0xc822bb5de0 sp=0xc822bb5ac0
cmd/compile/internal/amd64.Main()
    /usr/local/Cellar/go/1.6.2/libexec/src/cmd/compile/internal/amd64/galign.go:127 +0x58d fp=0xc822bb5e48 sp=0xc822bb5de0
main.main()
    /usr/local/Cellar/go/1.6.2/libexec/src/cmd/compile/main.go:32 +0x395 fp=0xc822bb5f20 sp=0xc822bb5e48
runtime.main()
    /usr/local/Cellar/go/1.6.2/libexec/src/runtime/proc.go:188 +0x2b0 fp=0xc822bb5f70 sp=0xc822bb5f20
runtime.goexit()
    /usr/local/Cellar/go/1.6.2/libexec/src/runtime/asm_amd64.s:1998 +0x1 fp=0xc822bb5f78 sp=0xc822bb5f70
make: *** [infrakit] Error 1

rename the binaries

Currently the binaries built by infrakit have unhelpful names, e.g. cli, file (conflicts with a common unix command), vagrant (it's not Vagrant), etc. The simplest solution is to just prefix them all with infrakit-...; not sure about other options.

For the plugins maybe it doesn't matter, but then we should remove the make install from the Makefile.

Add documentation for implementing a new plugin

I am currently at the InfraKit BoF in Berlin, playing around with it for the first time.
I want to create a flavor plugin for a specific stack, but I cannot find any documentation on writing a plugin.
I am checking the source code of other flavors, trying not to miss anything in my implementation.

It would be great to have plugin implementation documentation for (future) contributors, with the requirements for a group, instance, or flavor plugin.

Introduce a plugin type to manage Group state

The default Group plugin currently only has in-memory state for the Groups being watched. Introduce a new plugin type that the Group plugin is configured with, which delegates state management.

Validation of input data structures

We need a mechanism that can validate data structures in a simple, unified way without requiring too much work from provisioner writers:

  • Provide a syntax for describing if a struct field is required / optional
  • Verify that in a given structure, we can detect missing values and inform the client code that certain fields are missing if they are required.

This will be important to account for some YAML parsing bugs that are known (e.g. skipping unrecognized fields without errors: https://github.com/go-yaml/yaml/issues/136)

Some implementation ideas:

  • Use struct tags to allow driver / provisioner writer to indicate that a field is required or optional
  • Recommend using pointers as field types to disambiguate nil or zero values.

Example: We could introduce a tag check that specifies requiredness of a field:

type MyExampleRequest struct {
    Name     *string `check:"required,no_zero"`
    LastName *string `check:"optional"`
    Age      *uint32 `check:"required,no_zero"`
}

Here the developer decrees that Name is a required field and cannot be an empty string, while LastName can be optional and Age must exist and cannot be 0.

With this specification, we should be able to do validation check with a helper function like this:

    missing := []string{}
    err := Validate(myStruct, func(fieldName string, fieldValue interface{}) {
        // Callback invoked when a field is missing or has an unacceptable zero value.
        missing = append(missing, fieldName)
    })
    if err != nil {
        panic(fmt.Errorf("bad input %v. Missing fields: %v", err, missing))
    }

There may be other validation rules. Please comment in this thread.

Create a provider interface

Create the interface that will define the functions a provider may provide to provision machines. For the moment, this will only support creation of machines.

Do not use concrete error type in function returns

Currently the provisioner interface contains functions that instead of using the error interface type, return pointers to the concrete error type:

https://github.com/docker/libmachete/blob/master/spi/instance/spi.go#L8

This will cause problems when the SPI is wrapped by others who follow Go convention and return error from their functions. Here's a concrete example of the problem:

https://play.golang.org/p/6KM5W2mRT3

We should always return error, not pointers to a concrete error type.

Background: https://speakerdeck.com/campoy/understanding-nil and https://github.com/gophercon/2016-talks/blob/master/DaveCheney-DontCheckErrorsHandleThemGracefully/GopherCon%202016.pdf

Instead of declaring a public error type with error code and message, we should make this type package private and provide helper functions to return an appropriate HTTP response code if the caller cares about HTTP. Or have helper functions that allow the caller to reason about the nature / context of the error. Inside the helper functions we then do type assertions.

Manage secrets - SSH keys for machines

A few points on managing machine secrets

  • This should be separate from managing provider access tokens / credentials
  • Need to scale from local desktop usage to enterprise / sharing / secrets management without impacting user experience
  • If the user starts out with the single Engine and then slowly scale up to a cluster / enterprise scenario, how to migrate this seamlessly?
  • Support for password-enabled SSH key? What does that integration look like for outside of local desktop UX?

Implementation Notes

Consider gomock alternatives

We've found that gomock introduces more problems than it solves. I'd recommend removing it and replacing it with simple interface embedding. The following provides the same functionality as gomock:

type myMock struct {
  MockedInterface
}

// Override will intercept override.
func (m *myMock) Override(...) { }

Wish I could have warned you earlier.

Group updater: add support for parallelism

Addresses this TODO in rollingupdate.go:

// TODO(wfarner): Make the 'batch size' configurable.

We should accept a parallelism flag for updates to allow multiple instances to be drained simultaneously.

Create a component that will synthesize machine templates

The component must receive a template schema (perhaps interface{}) and a YAML document to marshal into the schema. It should additionally receive a second YAML document to overlay onto the first. In this case, the first document represents the template and the second defines overrides.

Some thoughts on implementation:

  • Unknown schema fields must be rejected with an error
  • The schema must have a means of defining optional and required fields
  • Unset required fields must be rejected with an error

Systematic logging support

Right now logging is pretty ad hoc. Other than using logrus here and there, there is no pattern or systematic support for provisioner developers. docker-machine exposes a logger for driver developers to use, and libmachete should as well.

Support for creating and managing infrastructure for swarms

This is the top priority for supporting Cloud. Also seeking convergence with Docker4X (Editions). The requirements are

  • Ensure a Swarm-compatible networking environment exists
    • On IaaS providers, make sure subnets exist with the proper firewall rules so that Swarm manager and agent nodes can be launched. If no existing networks are specified by user, provision new networks:
      • On AWS this can be running a pre-configured CFN template (this is the approach by Docker4AWS)
      • On Azure this can be running an Azure Resource Management template.
      • On platforms that do not support resource templates / scripting like CFN or ARM templates, this will be accomplished via API calls and host configurations (eg. setting up ufw on hosts etc)
  • Launch Swarm
    • Launch Manager nodes -- on AWS / Azure this is done via supported machine images and userdata on launched instances to execute swarm init and swarm join for managers. On other platforms where there are no Docker-provided machine images, we must install, configure and launch the latest Docker Engines for this purpose.
    • Launch agent / worker nodes - same approach as manager nodes, but with different instance initialization to join the worker node to the swarm.
  • Auto-scale the swarm
    • In general, use the best practice on each platform - autoscaling groups on AWS and availability sets on Azure
    • On other platforms like Packet and DigitalOcean, this can mean active processes / nodes that emulate the functionality of an autoscaling group. Initially this can just be maintaining a constant instance count.
  • Swarm upgrade
    • Upgrades the manager nodes and agent nodes. This entails stopping / upgrading / starting manager nodes, draining worker nodes, upgrading and rejoining the agent nodes.
    • See AWS implementation using DynamoDB for coordination and expected behavior, which may have to be emulated on other platforms.
  • Load balancer integration
    • On supported platforms, integrate with native load balancer solutions -- ELB on AWS, Azure Load Balancer on Azure.
    • Initially L4 routing (port to service published ports); L7 and L4/L7 LB hierarchies in future releases
    • Emulate similar behavior on other platforms, possibly using nginx or haproxy.

e2e/aws sources are not validated by CI

CI doesn't necessarily need to run end-to-end tests (yet), but we should at least have it compile and lint those sources to make sure they are not completely broken.

Group updater: Flavor changes don't register as Instance changes

TODO from types.go:

// TODO(wfarner): This does not consider flavor plugin and properties. At present, since details like group
// size and logical IDs are extracted from opaque properties, there's no way to distinguish between a group size
// change and a change requiring a rolling update.

Improve UX with frontend for managing multiple groups and other resource types

Currently the CLI is very developer-facing and a free-for-all in that it reflects all the HTTP endpoints for all the plugin types. It's handy for developers to develop and test the building blocks, but confusing to end users / ops.

The Issue here tracks the design and development of a separate frontend (and likely a separate CLI) that will provide a unified UX that will be much better suited for end-users of this toolkit and not just for developers.

Features

  • Support configurations for multiple groups
  • Support configurations for resource types
  • Use file system for organizing namespaces for different groups and plugins. All configuration JSONs will be placed in appropriate locations in the filesystem. This means no more stdin via the cli tool.

This "frontend" will likely to provide means to simplify the start up for plugins -- by examining the local filesystem holding the configs and determine the plugins that need to be run. Ability to do this will be environment specific (e.g. running the plugins as unix daemons or as docker containers or whatever), so a base implementation and mechanisms for naming/ typing of plugins will have to be worked out as part of this.

Credentials manager

We need a subsystem to manage credentials (e.g. AWS access key/secret, Azure OAuth tokens). This subsystem will allow the user to

  • Add provisioner-specific credentials
  • Remove credential
  • List existing credentials
  • Reference the credential by a handle when performing operations that require provisioner API credentials (eg. AWS create instance).

The storage of credentials will be separate from other system states such as inventory, SSH keys and TLS certs. This separation will make it possible for the use cases of backing up or sharing states without also backing up / sharing API secrets.

For example, the following operations should be possible:

$ machete credentials add aws production --access_key=...
$ machete credentials add aws production-payments-team --access_key=...
$ machete credentials add azure development --oauth_token_file=...
$ machete credentials list
production
production-payments-team
development

$ machete create web1 --cred=production -instanceType=.....
$ machete create batchHost1 --cred=production-payments -instanceType=.....

Use json-RPC for inter-plugin communication

Our scaffolding for communication already resembles the APIs to net/rpc, which can be adapted to JSON-RPC with net/rpc/jsonrpc. The switch would allow us to offload some custom code and move to a standard wire format as well.

Also weigh net/rpc/jsonrpc against other libraries like github.com/gorilla/rpc/json.

Make fails intermittently

$ make ci
+ fmt
+ vet
+ lint
+ vendor-sync
# cd .; git clone https://go.googlesource.com/net /Users/aluther/code/go/.cache/govendor/golang.org/x/net
Cloning into '/Users/aluther/code/go/.cache/govendor/golang.org/x/net'...
error: RPC failed; curl 56 SSLRead() return error -36
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed
Error: Remotes failed for:
    Failed for "golang.org/x/net/context" (failed to clone repo): exit status 128

make: *** [vendor-sync] Error 2

Any ideas what could be wrong here?

Terraform plugin having trouble calling Terraform

Hi all,

In running through the Terraform demo, I'm having trouble when I set InfraKit to start watching the group:

In window01 I launch the terraform plugin:

$ infrakit/terraform --log 5 --dir $(pwd)/example/instance/terraform/aws-two-tier/
INFO[0000] Starting plugin
INFO[0000] Listening on: unix:///run/infrakit/plugins/instance-terraform.sock
DEBU[0000] terraform instance plugin. dir= /Users/jesse/Code/learn/dkr_infrakit/src/github.com/docker/infrakit/example/instance/terraform/aws-two-tier/
INFO[0000] listener protocol= unix addr= /run/infrakit/plugins/instance-terraform.sock err= <nil>

Once I try to set InfraKit to watch that group:

$ infrakit/cli group watch example/instance/terraform/aws-two-tier/group.json

I get a bunch of errors in the original window01 that make it seem as though InfraKit can't find the Terraform executable.

INFO[0322] Acquired lock.  Applying
INFO[0322] Can't acquire lock.  Wait.
INFO[0322] Error: unknown command "apply" for "terraform"
  terraform=apply
INFO[0322] Run 'terraform --help' for usage.
            terraform=apply
INFO[0322] time="2016-10-06T14:01:31-04:00" level=error msg="unknown command \"apply\" for \"terraform\""
  terraform=apply
INFO[0322] Acquired lock.  Applying
INFO[0322] Can't acquire lock.  Wait.
INFO[0322] Error: unknown command "apply" for "terraform"
  terraform=apply
INFO[0322] Run 'terraform --help' for usage.
            terraform=apply
INFO[0322] time="2016-10-06T14:01:31-04:00" level=error msg="unknown command \"apply\" for \"terraform\""
  terraform=apply
INFO[0322] Acquired lock.  Applying
INFO[0322] Can't acquire lock.  Wait.
INFO[0322] Error: unknown command "apply" for "terraform"
  terraform=apply
INFO[0322] Run 'terraform --help' for usage.

Running terraform outside of the directory gives me the expected program output:

$ terraform
usage: terraform [--version] [--help] <command> [args]

The available commands for execution are listed below.
The most common, useful commands are shown first, followed by
less common or more advanced commands. If you're just getting
started with Terraform, stick with the common commands. For the
other commands, please read the help and docs before usage.
...

While running it from the Go program directory starts the unix socket:

WOPR:infrakit jesse$ terraform
INFO[0000] Starting plugin
INFO[0000] Listening on: unix:///run/infrakit/plugins/instance-terraform.sock

I'm unsure how to tell whether InfraKit is calling Terraform appropriately, or how to configure it. This is on Mac OS X Yosemite.

Why is the dependency not met???

I haven't researched this enough and need some explanation. You put something up in the doc and failed to tell us the dependency. Probably it slipped your tired mind.

bhaskar@Fedora_07:14:39_Fri Oct 07:~/git-linux/infrakit>make ci
+ fmt
+ vet
can't load package: package _/home/bhaskar/git-linux/infrakit/cmd/cli: cannot find package "_/home/bhaskar/git-linux/infrakit/cmd/cli" in any of:
        /usr/lib/golang/src/_/home/bhaskar/git-linux/infrakit/cmd/cli (from $GOROOT)
        /home/bhaskar/go/src/_/home/bhaskar/git-linux/infrakit/cmd/cli (from $GOPATH)
can't load package: package _/home/bhaskar/git-linux/infrakit/cmd/group: cannot find package "_/home/bhaskar/git-linux/infrakit/cmd/group" in any of:
        /usr/lib/golang/src/_/home/bhaskar/git-linux/infrakit/cmd/group (from $GOROOT)
        /home/bhaskar/go/src/_/home/bhaskar/git-linux/infrakit/cmd/group (from $GOPATH)
can't load package: package _/home/bhaskar/git-linux/infrakit/discovery: cannot find package "_/home/bhaskar/git-linux/infrakit/discovery" in any of:
        /usr/lib/golang/src/_/home/bhaskar/git-linux/infrakit/discovery (from $GOROOT)
        /home/bhaskar/go/src/_/home/bhaskar/git-linux/infrakit/discovery (from $GOPATH)
can't load package: package _/home/bhaskar/git-linux/infrakit/example/flavor/swarm: cannot find package "_/home/bhaskar/git-linux/infrakit/example/flavor/swarm" in any of:
        /usr/lib/golang/src/_/home/bhaskar/git-linux/infrakit/example/flavor/swarm (from $GOROOT)
        /home/bhaskar/go/src/_/home/bhaskar/git-linux/infrakit/example/flavor/swarm (from $GOPATH)
can't load package: package _/home/bhaskar/git-linux/infrakit/example/flavor/vanilla: cannot find package "_/home/bhaskar/git-linux/infrakit/example/flavor/vanilla" in any of:
        /usr/lib/golang/src/_/home/bhaskar/git-linux/infrakit/example/flavor/vanilla (from $GOROOT)
        /home/bhaskar/go/src/_/home/bhaskar/git-linux/infrakit/example/flavor/vanilla (from $GOPATH)
can't load package: package _/home/bhaskar/git-linux/infrakit/example/flavor/zookeeper: cannot find package "_/home/bhaskar/git-linux/infrakit/example/flavor/zookeeper" in any of:
        /usr/lib/golang/src/_/home/bhaskar/git-linux/infrakit/example/flavor/zookeeper (from $GOROOT)
        /home/bhaskar/go/src/_/home/bhaskar/git-linux/infrakit/example/flavor/zookeeper (from $GOPATH)
can't load package: package _/home/bhaskar/git-linux/infrakit/example/instance/file: cannot find package "_/home/bhaskar/git-linux/infrakit/example/instance/file" in any of:
        /usr/lib/golang/src/_/home/bhaskar/git-linux/infrakit/example/instance/file (from $GOROOT)
        /home/bhaskar/go/src/_/home/bhaskar/git-linux/infrakit/example/instance/file (from $GOPATH)
can't load package: package _/home/bhaskar/git-linux/infrakit/example/instance/terraform: cannot find package "_/home/bhaskar/git-linux/infrakit/example/instance/terraform" in any of:
        /usr/lib/golang/src/_/home/bhaskar/git-linux/infrakit/example/instance/terraform (from $GOROOT)
        /home/bhaskar/go/src/_/home/bhaskar/git-linux/infrakit/example/instance/terraform (from $GOPATH)
can't load package: package _/home/bhaskar/git-linux/infrakit/example/instance/vagrant: cannot find package "_/home/bhaskar/git-linux/infrakit/example/instance/vagrant" in any of:
        /usr/lib/golang/src/_/home/bhaskar/git-linux/infrakit/example/instance/vagrant (from $GOROOT)
        /home/bhaskar/go/src/_/home/bhaskar/git-linux/infrakit/example/instance/vagrant (from $GOPATH)
can't load package: package _/home/bhaskar/git-linux/infrakit/plugin: cannot find package "_/home/bhaskar/git-linux/infrakit/plugin" in any of:
        /usr/lib/golang/src/_/home/bhaskar/git-linux/infrakit/plugin (from $GOROOT)
        /home/bhaskar/go/src/_/home/bhaskar/git-linux/infrakit/plugin (from $GOPATH)
can't load package: package _/home/bhaskar/git-linux/infrakit/plugin/flavor/swarm: cannot find package "_/home/bhaskar/git-linux/infrakit/plugin/flavor/swarm" in any of:
        /usr/lib/golang/src/_/home/bhaskar/git-linux/infrakit/plugin/flavor/swarm (from $GOROOT)
        /home/bhaskar/go/src/_/home/bhaskar/git-linux/infrakit/plugin/flavor/swarm (from $GOPATH)
can't load package: package _/home/bhaskar/git-linux/infrakit/plugin/flavor/vanilla: cannot find package "_/home/bhaskar/git-linux/infrakit/plugin/flavor/vanilla" in any of:
        /usr/lib/golang/src/_/home/bhaskar/git-linux/infrakit/plugin/flavor/vanilla (from $GOROOT)
        /home/bhaskar/go/src/_/home/bhaskar/git-linux/infrakit/plugin/flavor/vanilla (from $GOPATH)
can't load package: package _/home/bhaskar/git-linux/infrakit/plugin/flavor/zookeeper: cannot find package "_/home/bhaskar/git-linux/infrakit/plugin/flavor/zookeeper" in any of:
        /usr/lib/golang/src/_/home/bhaskar/git-linux/infrakit/plugin/flavor/zookeeper (from $GOROOT)
        /home/bhaskar/go/src/_/home/bhaskar/git-linux/infrakit/plugin/flavor/zookeeper (from $GOPATH)
can't load package: package _/home/bhaskar/git-linux/infrakit/plugin/group: cannot find package "_/home/bhaskar/git-linux/infrakit/plugin/group" in any of:
        /usr/lib/golang/src/_/home/bhaskar/git-linux/infrakit/plugin/group (from $GOROOT)
        /home/bhaskar/go/src/_/home/bhaskar/git-linux/infrakit/plugin/group (from $GOPATH)
can't load package: package _/home/bhaskar/git-linux/infrakit/plugin/group/types: cannot find package "_/home/bhaskar/git-linux/infrakit/plugin/group/types" in any of:
        /usr/lib/golang/src/_/home/bhaskar/git-linux/infrakit/plugin/group/types (from $GOROOT)
        /home/bhaskar/go/src/_/home/bhaskar/git-linux/infrakit/plugin/group/types (from $GOPATH)
can't load package: package _/home/bhaskar/git-linux/infrakit/plugin/group/util: cannot find package "_/home/bhaskar/git-linux/infrakit/plugin/group/util" in any of:
        /usr/lib/golang/src/_/home/bhaskar/git-linux/infrakit/plugin/group/util (from $GOROOT)
        /home/bhaskar/go/src/_/home/bhaskar/git-linux/infrakit/plugin/group/util (from $GOPATH)
can't load package: package _/home/bhaskar/git-linux/infrakit/plugin/instance/vagrant: cannot find package "_/home/bhaskar/git-linux/infrakit/plugin/instance/vagrant" in any of:
        /usr/lib/golang/src/_/home/bhaskar/git-linux/infrakit/plugin/instance/vagrant (from $GOROOT)
        /home/bhaskar/go/src/_/home/bhaskar/git-linux/infrakit/plugin/instance/vagrant (from $GOPATH)
can't load package: package _/home/bhaskar/git-linux/infrakit/plugin/util: cannot find package "_/home/bhaskar/git-linux/infrakit/plugin/util" in any of:
        /usr/lib/golang/src/_/home/bhaskar/git-linux/infrakit/plugin/util (from $GOROOT)
        /home/bhaskar/go/src/_/home/bhaskar/git-linux/infrakit/plugin/util (from $GOPATH)
can't load package: package _/home/bhaskar/git-linux/infrakit/spi/flavor: cannot find package "_/home/bhaskar/git-linux/infrakit/spi/flavor" in any of:
        /usr/lib/golang/src/_/home/bhaskar/git-linux/infrakit/spi/flavor (from $GOROOT)
        /home/bhaskar/go/src/_/home/bhaskar/git-linux/infrakit/spi/flavor (from $GOPATH)
can't load package: package _/home/bhaskar/git-linux/infrakit/spi/group: cannot find package "_/home/bhaskar/git-linux/infrakit/spi/group" in any of:
        /usr/lib/golang/src/_/home/bhaskar/git-linux/infrakit/spi/group (from $GOROOT)
        /home/bhaskar/go/src/_/home/bhaskar/git-linux/infrakit/spi/group (from $GOPATH)
can't load package: package _/home/bhaskar/git-linux/infrakit/spi/http/flavor: cannot find package "_/home/bhaskar/git-linux/infrakit/spi/http/flavor" in any of:
        /usr/lib/golang/src/_/home/bhaskar/git-linux/infrakit/spi/http/flavor (from $GOROOT)
        /home/bhaskar/go/src/_/home/bhaskar/git-linux/infrakit/spi/http/flavor (from $GOPATH)
can't load package: package _/home/bhaskar/git-linux/infrakit/spi/http/group: cannot find package "_/home/bhaskar/git-linux/infrakit/spi/http/group" in any of:
        /usr/lib/golang/src/_/home/bhaskar/git-linux/infrakit/spi/http/group (from $GOROOT)
        /home/bhaskar/go/src/_/home/bhaskar/git-linux/infrakit/spi/http/group (from $GOPATH)
can't load package: package _/home/bhaskar/git-linux/infrakit/spi/http/instance: cannot find package "_/home/bhaskar/git-linux/infrakit/spi/http/instance" in any of:
        /usr/lib/golang/src/_/home/bhaskar/git-linux/infrakit/spi/http/instance (from $GOROOT)
        /home/bhaskar/go/src/_/home/bhaskar/git-linux/infrakit/spi/http/instance (from $GOPATH)
can't load package: package _/home/bhaskar/git-linux/infrakit/spi/instance: cannot find package "_/home/bhaskar/git-linux/infrakit/spi/instance" in any of:
        /usr/lib/golang/src/_/home/bhaskar/git-linux/infrakit/spi/instance (from $GOROOT)
        /home/bhaskar/go/src/_/home/bhaskar/git-linux/infrakit/spi/instance (from $GOPATH)
Makefile:29: recipe for target 'vet' failed
make: *** [vet] Error 1

`infrakit group watch` hangs when no arguments are provided

This command hangs, which is confusing:

$ build/infrakit group watch

I suspect this is an artifact of accepting input via stdin. Perhaps we should drop support for reading configuration from stdin, or support it only through a non-default mode of the command.

Build binaries in a docker container

Some users have come across issues when building infrakit. We can improve the build UX by doing the build inside docker containers.

Some things to consider...

  • Support cross-compilation. Linux is the default build target, but the user's local dev platform is likely not Linux, so we also need to:
  • Support GOOS=darwin or whatever the user's local development platform is. This is necessary since we don't require Docker at runtime.

This gets into the topic of whether we should provide Docker containers to simplify experimentation and running of the binaries. It's outside the scope of this Issue unless people feel otherwise. Please comment here.

Implement Engine install and configuration task after machine is created.

We need to implement a consistent way to install the Engine on a host after it's been created. This is the last stage of the processing pipeline that looks like:

  1. Use existing or generate new SSH key for a node
  2. Create the node using provisioner API and a specified template with optionally provided overrides
  3. Install and configure Docker engine on that host
    • Using the login credentials from step 1.
    • Possibly different Engine configurations optimized for each platform node / OS version
  4. The Engine is now ready and can be added to a Swarm cluster (TBD later)

Note

  • We will likely not write new Engine installation scripts. Just invoke the existing Docker installation scripts ==> will look at how Machine does this.
  • We can use the same template pattern for engine configurations: e.g. engine_ubuntu14_04 or engine_alpine
  • The engine configuration template can be grouped / bundled along with the machine templates so that we have proper association of recommended engine configuration with any Edition-specific params like AMI / image id, OS version, instance types. ==> does this mean we should group configuration templates in folders or via some naming convention of the file names?

make -k infrakit complains about kardianos/govendor even when it is already installed

Following the README on the project page, I did a go get for "kardianos/govendor" and "golang/lint/golint".
After doing that, when I run "make -k infrakit", I get the error below:
Makefile:88: *** Please install govendor: go get github.com/kardianos/govendor. Stop.

Also, doing "make ci" throws the error:
Makefile:42: *** Please install golint: go get -u github.com/golang/lint/golint. Stop.

Go version: go-1.7.1
OS: macOS 10.12

Kindly look into this problem or suggest if I am missing something.

Thanks,
Gaurav

Updater/SPI: add support for draining instances

Addresses this TODO in rollingupdate.go:

// TODO(wfarner): Provide a mechanism to gracefully drain instances.

Draining would be a nice addition to make updates more graceful. Command line tooling would also be nice to perform manual service drains.

Manage Provider API Credentials and implement Auth Provider interfaces

Currently in Docker Machine, API credentials are managed by the drivers themselves, and they work differently from driver to driver:

  • AWS - either an AWS credentials file including the access ID and secret key, or command-line flags, or environment variables.
  • Azure - a more complicated JSON document, a serialized azure.SecurityPrincipalToken object containing the OAuth token, expiry, service, type, and refresh token, stored on disk. This token also expires frequently and requires automatic refreshes. Docker Machine currently triggers a desktop device flow, which is not suitable for running as a daemon.
  • DO - a user-provided (via flag) access token that is static (unless revoked).
  • Packet - similar to the DO access token (required by Cloud but not currently in Docker Machine).
  • GCE - OAuth, similar to Azure.

So different provisioners expect different input, and we could consider a design where credentials are stored in a YAML file that the user references, along with the provisioner (provider/driver) name, to authenticate. For example, suppose for AWS we need:

  • Region
  • Access ID
  • Secret key

These could go in a YAML file called myaws that looks like:

region: us-east
access_id: xxxx
secret_key: yyyy

and the command line could be

docker node create -driver=aws -auth=myaws -template=medium-appserver -count=10

It is then the responsibility of the AWS provisioner to parse the file and perform the necessary auth against the AWS API.

The same would apply to Azure, except the Azure provisioner would expect a YAML file of a different format, which it would load to perform API auth. The provisioner would, of course, have to provide a helpful error if the specified auth file isn't what it expects.

Consider allowing state to be stored in S3

Local machine state presents a challenge for users in terms of durability and consistency. It shouldn't be too difficult to implement a backing store that persists all writes to S3 for users that wish to enable it.

./example/flavor/swarm needs documenting

A README in the style of the tutorial would be very nice.

Extra points for tying it into the Terraform example, showing how to deploy a swarm onto some terraformed instances.

Comment about PluginName is incorrect

In InfraKit, plugin names don't relate to Docker Hub or registry in any way, so these comments need to be revised. We may even consider removing the PluginName fields altogether as they're not currently used in any meaningful way.

$ ag 'Docker Hub'
cmd/group/main.go
24:     // PluginName is the name of the plugin in the Docker Hub / registry

example/flavor/swarm/main.go
17:     // PluginName is the name of the plugin in the Docker Hub / registry

example/flavor/vanilla/main.go
16:     // PluginName is the name of the plugin in the Docker Hub / registry

example/flavor/zookeeper/main.go
16:     // PluginName is the name of the plugin in the Docker Hub / registry

example/instance/file/main.go
15:     // PluginName is the name of the plugin in the Docker Hub / registry

example/instance/terraform/main.go
16:     // PluginName is the name of the plugin in the Docker Hub / registry

example/instance/vagrant/main.go
16:     // PluginName is the name of the plugin in the Docker Hub / registry

Not enough Danny Trejo in repo

As you may know Danny Trejo starred as the character Machete in a few Robert Rodriguez films as well as in the 2001 classic, "Spy Kids".

Consider supporting 'remote' plugins via TCP

Spinning off from #214, which removed support for TCP in the plugin server and client libraries: we have had several requests to support communicating with plugins via TCP. The use case revolves around avoiding a separate body of code and service (the InfraKit plugin) to adapt a system to InfraKit. For example, if AWS exposed an InfraKit API directly, it would be a boon if users could interact with it without a locally-running plugin.

Define and implement a storage interface

The storage interface and implementation can be primitive for now, possibly as little as an association from a machine ID to basic attributes.

@chungers suggests implementing the store on libkv as it is a basic API and there is a reasonable future where it is used as the longer-term distributed store API.
