
libcompose's Introduction

⚠️ Deprecation Notice: This project and repository are now deprecated and no longer under active development. Please use compose-go instead.

libcompose

GoDoc Build Status

A Go library for Docker Compose. It does everything the command-line tool does, but from within Go -- read Compose files, start them, scale them, etc.

Note: This is not really maintained anymore; the reasons are diverse, but it's mainly a lack of time from the maintainers.

The current state is the following:

  • The libcompose CLI should be considered abandoned: v2 parsing is incomplete and v3 parsing is missing.
  • The official Compose Go parser implementation lives in docker/cli, but it only supports v3 of the compose format.

Work that is still needed:

  • Remove the CLI code (thus removing the dependency on docker/cli)
  • A clearer separation of packages: parsing, conversion (to the Docker API or Swarm API), and execution (Up, Down, … behaviors)
  • Add support for all compose format versions (v1, v2.x, v3.x)
  • Switch to either golang/dep or go mod for dependency management (removing the vendor folder)
  • (bonus) Extract the docker/cli code here and vendor this library into docker/cli.

If you are interested in working on libcompose, feel free to ping me (on Twitter, @vdemeest); I'll definitely do code reviews and help as much as I can 😉.

Note: This is experimental and not intended to replace the Docker Compose command-line tool. If you're looking to use Compose, head over to the Compose installation instructions to get started with it.

Here is a list of known projects that use libcompose:

Usage

package main

import (
	"log"

	"golang.org/x/net/context"

	"github.com/docker/libcompose/docker"
	"github.com/docker/libcompose/docker/ctx"
	"github.com/docker/libcompose/project"
	"github.com/docker/libcompose/project/options"
)

func main() {
	project, err := docker.NewProject(&ctx.Context{
		Context: project.Context{
			ComposeFiles: []string{"docker-compose.yml"},
			ProjectName:  "my-compose",
		},
	}, nil)

	if err != nil {
		log.Fatal(err)
	}

	err = project.Up(context.Background(), options.Up{})

	if err != nil {
		log.Fatal(err)
	}
}

Building

You need either Docker and make, or go in order to build libcompose.

Building with docker

You need Docker and make; then run the binary target. This will create binaries for all platforms in the bundles folder.

$ make binary
docker build -t "libcompose-dev:refactor-makefile" .
# […]
---> Making bundle: binary (in .)
Number of parallel builds: 4

-->      darwin/386: github.com/docker/libcompose/cli/main
-->    darwin/amd64: github.com/docker/libcompose/cli/main
-->       linux/386: github.com/docker/libcompose/cli/main
-->     linux/amd64: github.com/docker/libcompose/cli/main
-->       linux/arm: github.com/docker/libcompose/cli/main
-->     windows/386: github.com/docker/libcompose/cli/main
-->   windows/amd64: github.com/docker/libcompose/cli/main

$ ls bundles
libcompose-cli_darwin-386*    libcompose-cli_linux-amd64*      libcompose-cli_windows-amd64.exe*
libcompose-cli_darwin-amd64*  libcompose-cli_linux-arm*
libcompose-cli_linux-386*     libcompose-cli_windows-386.exe*

Building with go

  • You need go v1.11 or greater
  • You need to set the GO111MODULE=on environment variable (export GO111MODULE=on)
  • If your working copy is not in your GOPATH, you need to set it accordingly.
$ go generate
# Generate some stuff
$ go build -o libcompose ./cli/main

Running

A partial libcompose-cli CLI is also implemented in Go. Its primary purpose is to make it easy to test the behavior of libcompose.

Run one of these:

libcompose-cli_darwin-386
libcompose-cli_linux-amd64
libcompose-cli_windows-amd64.exe
libcompose-cli_darwin-amd64
libcompose-cli_linux-arm
libcompose-cli_linux-386
libcompose-cli_windows-386.exe

Tests (unit & integration)

You can run the unit tests using the test-unit target and the integration tests using the test-integration target. If you don't use Docker and make to build libcompose, you can use go test and the following scripts: hack/test-unit and hack/test-integration.

$ make test-unit
docker build -t "libcompose-dev:refactor-makefile" .
#[…]
---> Making bundle: test-unit (in .)
+ go test -cover -coverprofile=cover.out ./docker
ok      github.com/docker/libcompose/docker     0.019s  coverage: 4.6% of statements
+ go test -cover -coverprofile=cover.out ./project
ok      github.com/docker/libcompose/project    0.010s  coverage: 8.4% of statements
+ go test -cover -coverprofile=cover.out ./version
ok      github.com/docker/libcompose/version    0.002s  coverage: 0.0% of statements

Test success

Current status

The project is still being kickstarted... But it does a lot. Please try it out and help us find bugs.

Contributing

Want to hack on libcompose? Docker's contributions guidelines apply.

If you have comments, questions, or want to use your knowledge to help others, come join the conversation on IRC. You can reach us at #libcompose on Freenode.

libcompose's People

Contributors

aanand, beornf, bfosberry, brancz, cpuid, dansteen, dnephin, dtan4, dustinrc, gdevillele, gitlawr, halfa, ibuildthecloud, imikushin, joshwget, jritsema, kunalkushwaha, ldez, lox, mikedougherty, mrajashree, rheinwein, shin-, surajnarwade, thajeztah, vdemeester, vito-c, xihan88, yudai, yuexiao-wang


libcompose's Issues

Proposal: ``libcompose-cli service`` commands (Experimental)

Background

docker compose/libcompose helps in managing multi-container apps. But to manage such apps, the user needs to know the config of each app and its location.
Whereas IMHO, once an application instance is created, it belongs to the Docker system, so its management should be independent of the location of the config file that was used for its creation.

Current Solutions

Solutions like UCP, Kitematic, and other (mostly GUI) third-party tools provide ways of working with services,
but none of them works for command-line users.

Why libcompose?

Since libcompose is targeted as a library for third-party solutions, implementing such a feature ensures that the workflow and compatibility of service management stay consistent across all of them.

Proposal

With the libcompose-cli create/up/scale commands, the yml configuration of services can be copied to compose-repo, with a few extra attributes.

  • The libcompose-cli rm command will remove the yml configuration of removed services.

compose-repo: by default, a folder on the local file system (~/.compose/), configurable in ~/.compose.yml.

  • A plugin-based storage interface enables storing the config on cloud / git / consul etc.

New commands

  • libcompose-cli service ls: List all services in the system and their state (Running/Stopped/Down)
  • libcompose-cli service show [service-name]: Show details of a service. Output similar to libcompose ps
  • libcompose-cli service stop/rm/start/etc.: Commands parallel to libcompose stop/rm/start etc.
  • libcompose-cli service history: Over the application life cycle, events of change (ports/network/storage/scale) can be recorded in history files. Such data can be compiled and shown here.
    • Could be helpful in troubleshooting.
  • libcompose-cli service version: [Needs more thought] This could be a new feature in itself, where multiple versions of one application can be created, so a new version can be deployed and, on error, rolled back.

Propose using https://github.com/fsouza/go-dockerclient

We currently use https://github.com/samalba/dockerclient to talk to Docker. I propose we switch to https://github.com/fsouza/go-dockerclient. The advantage of https://github.com/fsouza/go-dockerclient is that it has full coverage of the Docker API and is written in a style that is forward compatible. The @fsouza client seems more active from a larger-community perspective, whereas the samalba client is largely contributed to and maintained by maintainers of other Docker projects. While the samalba client is used in other projects like swarm (and machine?), their usage of the Docker API seems to be quite a small subset. Furthermore, I've found @fsouza to be very responsive in merging PRs, whereas the samalba client seems to take much longer.

Error when referring to libcompose as a dependency

I am currently integrating libcompose as a dependency in my project and am getting the following error.

Importing:

import (
    "github.com/docker/libcompose/docker"
    "github.com/docker/libcompose/project"
)

Updating dependencies via go get:

go get ./...

Error:

package github.com/docker/docker/autogen/dockerversion: cannot find package "github.com/docker/docker/autogen/dockerversion" in any of:
    /usr/local/Cellar/go/1.5.1/libexec/src/github.com/docker/docker/autogen/dockerversion (from $GOROOT)
    /Users/junruh/Development/golang/src/github.com/docker/docker/autogen/dockerversion (from $GOPATH)

I get the same issue using the example within the README. Any suggestions on how to get around this?

go get fails

I'm not sure how to correctly vendor this repo:

➜ mkdir gopath-example
➜ cd gopath-example
➜ export GOPATH=`pwd`
➜ go get -d github.com/docker/libcompose/docker
package github.com/docker/docker/autogen/dockerversion: cannot find package "github.com/docker/docker/autogen/dockerversion" in any of:
    /usr/local/Cellar/go/1.5/libexec/src/github.com/docker/docker/autogen/dockerversion (from $GOROOT)
    /Users/alp/workspace/gopath-example/src/github.com/docker/docker/autogen/dockerversion

I can see the pkg is here https://github.com/docker/libcompose/tree/master/Godeps/_workspace/src/github.com/docker/docker/autogen/dockerversion but not sure why it is hitting github for the dependencies.

Sorry in advance if it's some obvious thing about godeps but it seemed like this should be working.

nil panic error

I get this error when building custom ServiceConfigs; either there is an internal bug, or something I'm configuring in my service config is not valid.

Any ideas?

INFO[0000] Building jet--_app...                        
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x20 pc=0x5cc961]

goroutine 22 [running]:
encoding/json.(*Decoder).refill(0xc8203564e0, 0x0, 0x0)
    /usr/local/go/src/encoding/json/stream.go:152 +0x281
encoding/json.(*Decoder).readValue(0xc8203564e0, 0x1, 0x0, 0x0)
    /usr/local/go/src/encoding/json/stream.go:128 +0x41b
encoding/json.(*Decoder).Decode(0xc8203564e0, 0xa821c0, 0xc82036a090, 0x0, 0x0)
    /usr/local/go/src/encoding/json/stream.go:57 +0x159
github.com/docker/docker/pkg/jsonmessage.DisplayJSONMessagesStream(0x0, 0x0, 0x7f5862e84428, 0xc82002e010, 0x1, 0x1, 0x0, 0x0, 0x0)
    /home/bfosberry/.go/src/github.com/docker/docker/pkg/jsonmessage/jsonmessage.go:161 +0x19e
github.com/docker/libcompose/docker.(*DaemonBuilder).Build(0xc8202fa388, 0xc820404620, 0x9, 0xc820301f80, 0x7f5860e00fd8, 0xc8203fa920, 0x0, 0x0)
    /home/bfosberry/.go/src/github.com/docker/libcompose/docker/builder.go:83 +0x8f7
github.com/docker/libcompose/docker.(*Service).build(0xc8203fa920, 0x0, 0x0)
    /home/bfosberry/.go/src/github.com/docker/libcompose/docker/service.go:143 +0x130
github.com/docker/libcompose/docker.(*Service).Build(0xc8203fa920, 0x0, 0x0)
    /home/bfosberry/.go/src/github.com/docker/libcompose/docker/service.go:135 +0x4f
github.com/docker/libcompose/project.(*Project).Build.func1.1(0x7f5860e00fd8, 0xc8203fa920, 0x0, 0x0)
    /home/bfosberry/.go/src/github.com/docker/libcompose/project/project.go:172 +0x33
github.com/docker/libcompose/project.(*serviceWrapper).Do(0xc8203f3340, 0xc8203c79b0, 0x15, 0x16, 0xcf2508)
    /home/bfosberry/.go/src/github.com/docker/libcompose/project/service-wrapper.go:100 +0x1f6
github.com/docker/libcompose/project.(*Project).Build.func1(0xc8203f3340, 0xc8203c79b0)
    /home/bfosberry/.go/src/github.com/docker/libcompose/project/project.go:173 +0x49
created by github.com/docker/libcompose/project.(*Project).startService
    /home/bfosberry/.go/src/github.com/docker/libcompose/project/project.go:378 +0x581

goroutine 1 [runnable]:
sync.runtime_Semacquire(0xc8203f336c)
    /usr/local/go/src/runtime/sema.go:43 +0x26
sync.(*WaitGroup).Wait(0xc8203f3360)
    /usr/local/go/src/sync/waitgroup.go:126 +0xb4
github.com/docker/libcompose/project.(*serviceWrapper).Wait(0xc8203f3340, 0x0, 0x0)
    /home/bfosberry/.go/src/github.com/docker/libcompose/project/service-wrapper.go:112 +0x39
github.com/docker/libcompose/project.(*Project).traverse(0xc820301f80, 0xc820163401, 0xc8200eb500, 0xc8203c79b0, 0xcf2510, 0x0, 0x0, 0x0)
    /home/bfosberry/.go/src/github.com/docker/libcompose/project/project.go:425 +0x88c
github.com/docker/libcompose/project.(*Project).forEach(0xc820301f80, 0xc8200eb848, 0x1, 0x1, 0xcf2510, 0x0, 0x0, 0x0)
    /home/bfosberry/.go/src/github.com/docker/libcompose/project/project.go:332 +0x21e
github.com/docker/libcompose/project.(*Project).perform(0xc820301f80, 0x2b, 0x2c, 0xc8200eb848, 0x1, 0x1, 0xcf2510, 0x0, 0x0, 0x0)
    /home/bfosberry/.go/src/github.com/docker/libcompose/project/project.go:314 +0x9f
github.com/docker/libcompose/project.(*Project).Build(0xc820301f80, 0xc8200eb848, 0x1, 0x1, 0x0, 0x0)
    /home/bfosberry/.go/src/github.com/docker/libcompose/project/project.go:174 +0x79
github.com/codeship/jet/lib/compose.(*client).Load(0xc8203ca730, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc8200eb848, ...)
    /home/bfosberry/.go/src/github.com/codeship/jet/lib/compose/client.go:93 +0x1af
github.com/codeship/jet/lib/compose.(*client).Run(0xc8203ca730, 0x0, 0x0, 0x7fff198ae239, 0x3, 0x7fff198ae23d, 0x8, 0x0, 0x0, 0x0, ...)
    /home/bfosberry/.go/src/github.com/codeship/jet/lib/compose/client.go:51 +0x146
github.com/codeship/jet/lib/jet.(*client).Run.func1(0x7f5860e00aa0, 0xc8203ca730, 0x0, 0x0)
    /home/bfosberry/.go/src/github.com/codeship/jet/lib/jet/client.go:101 +0xfe
github.com/codeship/jet/lib/jet.(*client).dockerClientCall.func2(0x0, 0x0)
    /home/bfosberry/.go/src/github.com/codeship/jet/lib/jet/client.go:193 +0x49
github.com/codeship/jet/lib/jet.(*client).tryHandleUserError(0xc8203c3030, 0xc8200eba48, 0x0, 0x0)
    /home/bfosberry/.go/src/github.com/codeship/jet/lib/jet/client.go:226 +0x2d
github.com/codeship/jet/lib/jet.(*client).dockerClientCall(0xc8203c3030, 0xc8203d1100, 0xc820395d20, 0xc8200ebae8, 0x0, 0x0)
    /home/bfosberry/.go/src/github.com/codeship/jet/lib/jet/client.go:193 +0x134
github.com/codeship/jet/lib/jet.(*client).Run(0xc8203c3030, 0x7fff198ae239, 0x3, 0x7fff198ae23d, 0x8, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
    /home/bfosberry/.go/src/github.com/codeship/jet/lib/jet/client.go:102 +0x247
main.main.func6(0xc820357040, 0xc82035ffb0, 0x2, 0x3)
    /home/bfosberry/.go/src/github.com/codeship/jet/cmd/jet/main.go:147 +0x3a1
github.com/spf13/cobra.(*Command).execute(0xc820357040, 0xc82035ff20, 0x3, 0x3, 0x0, 0x0)
    /home/bfosberry/.go/src/github.com/spf13/cobra/command.go:495 +0x6e3
github.com/spf13/cobra.(*Command).Execute(0xc8203576c0, 0x0, 0x0)
    /home/bfosberry/.go/src/github.com/spf13/cobra/command.go:560 +0x180
main.main()
    /home/bfosberry/.go/src/github.com/codeship/jet/cmd/jet/main.go:217 +0xfa3

goroutine 17 [syscall, locked to thread]:
runtime.goexit()
    /usr/local/go/src/runtime/asm_amd64.s:1721 +0x1

goroutine 5 [syscall]:
os/signal.loop()
    /usr/local/go/src/os/signal/signal_unix.go:22 +0x18
created by os/signal.init.1
    /usr/local/go/src/os/signal/signal_unix.go:28 +0x37

goroutine 18 [select, locked to thread]:
runtime.gopark(0xcf3228, 0xc82002cf28, 0xbabe60, 0x6, 0x433918, 0x2)
    /usr/local/go/src/runtime/proc.go:185 +0x163
runtime.selectgoImpl(0xc82002cf28, 0x0, 0x18)
    /usr/local/go/src/runtime/select.go:392 +0xa64
runtime.selectgo(0xc82002cf28)
    /usr/local/go/src/runtime/select.go:212 +0x12
runtime.ensureSigM.func1()
    /usr/local/go/src/runtime/signal1_unix.go:227 +0x353
runtime.goexit()
    /usr/local/go/src/runtime/asm_amd64.s:1721 +0x1

goroutine 20 [chan receive]:
github.com/docker/libcompose/project.(*defaultListener).start(0xc8203fa020)
    /home/bfosberry/.go/src/github.com/docker/libcompose/project/listener.go:47 +0x9e
created by github.com/docker/libcompose/project.NewDefaultListener
    /home/bfosberry/.go/src/github.com/docker/libcompose/project/listener.go:42 +0xb5

goroutine 28 [runnable]:
archive/tar.isASCII(0xc820405e00, 0xb, 0xb)
    /usr/local/go/src/archive/tar/common.go:310 +0x4b
archive/tar.(*Writer).cString(0xc820350480, 0xc82035053b, 0xc, 0x178, 0xc820405e00, 0xb, 0x0, 0x0, 0x0, 0x0)
    /usr/local/go/src/archive/tar/writer.go:74 +0x59
archive/tar.(*Writer).octal(0xc820350480, 0xc82035053b, 0xc, 0x178, 0x5678b40f)
    /usr/local/go/src/archive/tar/writer.go:99 +0xea
archive/tar.(*Writer).numeric(0xc820350480, 0xc82035053b, 0xc, 0x178, 0x5678b40f, 0x0, 0x0, 0x0, 0x0)
    /usr/local/go/src/archive/tar/writer.go:108 +0x89
archive/tar.(*Writer).writeHeader(0xc820350480, 0xc8203e1e10, 0xc82012ab01, 0x0, 0x0)
    /usr/local/go/src/archive/tar/writer.go:194 +0xbd9
archive/tar.(*Writer).WriteHeader(0xc820350480, 0xc8203e1e10, 0x0, 0x0)
    /usr/local/go/src/archive/tar/writer.go:139 +0x3c
github.com/docker/docker/pkg/archive.(*tarAppender).addTarFile(0xc8205cfdd0, 0xc820405290, 0xe, 0xc820405c40, 0xf, 0x0, 0x0)
    /home/bfosberry/.go/src/github.com/docker/docker/pkg/archive/archive.go:326 +0x8e6
github.com/docker/docker/pkg/archive.TarWithOptions.func1.2(0xc820405290, 0xe, 0x7f5862e88b88, 0xc82059f900, 0x0, 0x0, 0x0, 0x0)
    /home/bfosberry/.go/src/github.com/docker/docker/pkg/archive/archive.go:609 +0x6f1
path/filepath.walk(0xc820405290, 0xe, 0x7f5862e88b88, 0xc82059f900, 0xc8205cfe18, 0x0, 0x0)
    /usr/local/go/src/path/filepath/path.go:349 +0x80
path/filepath.walk(0xc8205a36e0, 0x9, 0x7f5862e88b88, 0xc82059f540, 0xc8205cfe18, 0x0, 0x0)
    /usr/local/go/src/path/filepath/path.go:374 +0x4fc
path/filepath.walk(0xc8205a29e0, 0x4, 0x7f5862e88b88, 0xc82059e280, 0xc8205cfe18, 0x0, 0x0)
    /usr/local/go/src/path/filepath/path.go:374 +0x4fc
path/filepath.walk(0xc8205a2010, 0x4, 0x7f5862e88b88, 0xc82059e0a0, 0xc8205cfe18, 0x0, 0x0)
    /usr/local/go/src/path/filepath/path.go:374 +0x4fc
path/filepath.Walk(0xc8205a2010, 0x4, 0xc8205cfe18, 0x0, 0x0)
    /usr/local/go/src/path/filepath/path.go:396 +0xe1
github.com/docker/docker/pkg/archive.TarWithOptions.func1(0x7f5860e0d928, 0xc820142fe0, 0xc82046c6c0, 0xc82002fee0, 0xc820151d60, 0xc820142fc0, 0x2, 0x2, 0xc8201354d0, 0x2, ...)
    /home/bfosberry/.go/src/github.com/docker/docker/pkg/archive/archive.go:613 +0x882
created by github.com/docker/docker/pkg/archive.TarWithOptions
    /home/bfosberry/.go/src/github.com/docker/docker/pkg/archive/archive.go:615 +0x45d

goroutine 34 [runnable]:
github.com/docker/engine-api/client/transport/cancellable.Do.func3(0x7f5860e0da00, 0xc820294e60, 0xc82016a270, 0xc8203d4060)
    /home/bfosberry/.go/src/github.com/docker/engine-api/client/transport/cancellable/cancellable.go:77
created by github.com/docker/engine-api/client/transport/cancellable.Do
    /home/bfosberry/.go/src/github.com/docker/engine-api/client/transport/cancellable/cancellable.go:84 +0x335

goroutine 32 [runnable]:
sync.runtime_Syncsemacquire(0xc8205ed9c0)
    /usr/local/go/src/runtime/sema.go:237 +0x201
sync.(*Cond).Wait(0xc8205ed9b0)
    /usr/local/go/src/sync/cond.go:62 +0x9b
io.(*pipe).read(0xc8205ed980, 0xc820274000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
    /usr/local/go/src/io/pipe.go:52 +0x2d2
io.(*PipeReader).Read(0xc82002fed8, 0xc820274000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
    /usr/local/go/src/io/pipe.go:134 +0x50
github.com/docker/docker/pkg/progress.(*Reader).Read(0xc820148540, 0xc820274000, 0x8000, 0x8000, 0xc8202fa318, 0x0, 0x0)
    /home/bfosberry/.go/src/github.com/docker/docker/pkg/progress/progressreader.go:30 +0x73
io.(*multiReader).Read(0xc8203fa2a0, 0xc820274000, 0x8000, 0x8000, 0x8000, 0x0, 0x0)
    /usr/local/go/src/io/multi.go:13 +0xa4
io.copyBuffer(0x7f5860e0ed50, 0xc8203d1a20, 0x7f5860e0ec48, 0xc8203fa2a0, 0xc820274000, 0x8000, 0x8000, 0x70000, 0x0, 0x0)
    /usr/local/go/src/io/io.go:381 +0x247
io.Copy(0x7f5860e0ed50, 0xc8203d1a20, 0x7f5860e0ec48, 0xc8203fa2a0, 0xc8203d1a20, 0x0, 0x0)
    /usr/local/go/src/io/io.go:351 +0x64
net/http.(*transferWriter).WriteBody(0xc820153f10, 0x7f5860e0ecf8, 0xc8202fa318, 0x0, 0x0)
    /usr/local/go/src/net/http/transfer.go:218 +0x2b2
net/http.(*Request).write(0xc8203121c0, 0x7f5860e0d900, 0xc820156e80, 0x0, 0x0, 0x0, 0x0)
    /usr/local/go/src/net/http/request.go:462 +0xbb9
net/http.(*persistConn).writeLoop(0xc820090580)
    /usr/local/go/src/net/http/transport.go:1015 +0x27c
created by net/http.(*Transport).dialConn
    /usr/local/go/src/net/http/transport.go:686 +0xc9d

Proposal: complete integration (cli) tests

It would be cool to cover the project with some CLI integration tests to be more confident in the contributions we all do 😅. What do you think?

This would mean :

  • Define a general (and simple to start) structure to write tests and execute them (taking inspiration of other docker projects). This is actually already done 😅, didn't look as much as I should have 😊.
  • Not running integration tests when running tests — or separate the run of unit-tests and integration-tests and have tests run both. (right now, running tests runs integration)
  • Use docker to run the integration tests (just like on the docker engine project). It feels strange to use docker to run tests, but not the integration tests. Maybe the unit-tests too 😅.
  • Create a goal in the Makefile (to do make test-integration-cli)

PS: I'm more than willing to help on that and send PRs, but I made the issue to get your thoughts on this and to keep track of the work that would be done on the subject.

Libcompose with Docker-Machine - x509: certificate signed by unknown authority

Now that libcompose is go-gettable (thank you), I proceeded to finish incorporating it into my current project. During a basic project up test, the following error occurred. I've also tried simplified test code in a new project to reproduce it.

I wasn't seeing this issue prior to libcompose being go-gettable. Not sure if it's related.

Error Log

INFO[0000] Project [my-compose]: Starting project
INFO[0000] [0/5] [search]: Starting
ERRO[0000] Failed Starting search : Get https://192.168.99.100:2376/v1.20/version: x509: certificate signed by unknown authority

Test Code

Using the example as is from the README and any existing docker-compose.yml in the same directory:

package main

import (
    "log"

    "github.com/docker/libcompose/docker"
    "github.com/docker/libcompose/project"
)

func main() {
    project, err := docker.NewProject(&docker.Context{
        Context: project.Context{
            ComposeFile: "docker-compose.yml",
            ProjectName: "my-compose",
        },
    })

    if err != nil {
        log.Fatal(err)
    }

    project.Up()
}

How do I set the docker host when using libcompose?

I am following the example in README.md:

func main() {
    project, err := docker.NewProject(&docker.Context{
        Context: project.Context{
            ComposeFiles: []string{"docker-compose.yml"},
            ProjectName:  "my-compose",
        },
    })

    if err != nil {
        log.Fatal(err)
    }

    project.Up()
}

I tried creating my own EnvironmentLookup:

type EnvLookup struct {
    endpoint string
}

func (c EnvLookup) Lookup(key, serviceName string, config *project.ServiceConfig) []string {
    if key == "DOCKER_HOST" {
        return []string{fmt.Sprintf("%s=%s", "DOCKER_HOST", "tcp://192.168.1.100:2375")}
    }

    return []string{}
}


func main() {
    project, err := docker.NewProject(&docker.Context{
        Context: project.Context{
            ComposeFiles: []string{"docker-compose.yml"},
            ProjectName:  "my-compose",
            EnvironmentLookup: EnvLookup{},
        },
    })

    if err != nil {
        log.Fatal(err)
    }

    project.Up()
}

However, EnvLookup is never called. What's the correct way to set the docker host (within my go code, rather than using the DOCKER_HOST environment variable)?

``libcompose-cli down`` should also remove containers

Behaviour should be consistent with docker-compose down

$ docker-compose down
Stopping sample_redis_1 ... done
Stopping sample_web_1 ... done
Removing sample_redis_1 ... done
Removing sample_web_1 ... done
 libcompose down
WARN[0000] Note: This is an experimental alternate implementation of the Compose CLI (https://github.com/docker/compose)
INFO[0000] Project [sample]: Stopping project
INFO[0000] [0/2] [redis]: Stopping
INFO[0000] [0/2] [web]: Stopping
INFO[0000] [0/2] [web]: Stopped
INFO[0000] [0/2] [redis]: Stopped
INFO[0000] Project [sample]: Project stopped

~/work/compose-files/sample ⌚ 14:26:45
$ libcompose ps
WARN[0000] Note: This is an experimental alternate implementation of the Compose CLI (https://github.com/docker/compose)
Name            Command                      State                      Ports
sample_web_1    top                          Exited (0) 16 seconds ago
sample_redis_1  /entrypoint.sh redis-server  Exited (0) 16 seconds ago

up command without detach should stop container on ctrl-c

When using docker-compose up, it starts all the containers in the foreground (or kind of 😝); thus, when hitting ctrl-c, it shuts them down. libcompose-cli on master does not.

$ docker-compose up 
# […]
# Hitting ctrl-c
$ docker-compose ps 
    Name                   Command               State    Ports 
---------------------------------------------------------------
compose_db_1    /docker-entrypoint.sh postgres   Exit 0         
compose_web_1   nginx -g daemon off;             Exit 0 

$ libcompose-cli up
# […]
# Hitting ctrl-c
$ libcompose-cli ps
Name           Command                         State         Ports
compose_db_1   /docker-entrypoint.sh postgres  Up 5 seconds  5432/tcp
compose_web_1  nginx -g 'daemon off;'          Up 6 seconds  443/tcp, 80/tcp

Volumes are not mounted correctly on the container

On Windows if you have a simple data container

data:
  image: busybox
  volumes:
    - /data
    - /var/lib/mysql

The filesystem of the container will show it mounted the following (ls -l /)

\data
\var\lib\mysql

Applying the windows slash pattern results in unusable volumes.

`libcompose-cli rm` log messages are confusing.

In the log messages for libcompose-cli rm, before asking for confirmation, it prints the container-deleting messages.

$ libcompose-cli rm
WARN[0000] Note: This is an experimental alternate implementation of the Compose CLI (https://github.com/docker/compose)
INFO[0000] [0/2] [web]: Deleting
INFO[0000] [0/2] [redis]: Deleting
INFO[0000] [0/2] [redis]: Deleted
Going to remove a40d47f53646656412a6a9162d0c4c6cfc2af66e52ed5550b820976c51d9997c, 2fbc01d5bfc23d7bd8467b113fa44f357e7998460121ce1c4513df316151dd70
Are you sure? [yN]
INFO[0000] [0/2] [web]: Deleted
y
INFO[0001] Project [sample]: Deleting project
INFO[0001] [0/2] [web]: Deleting
INFO[0001] [0/2] [redis]: Deleting
INFO[0001] [0/2] [redis]: Deleted
INFO[0001] [0/2] [web]: Deleted
INFO[0001] Project [sample]: Project deleted

Should be equivalent to

$ docker-compose rm
Going to remove sample_web_3, sample_web_2, sample_web_1, sample_redis_1
Are you sure? [yN] y
Removing sample_web_3 ... done
Removing sample_web_2 ... done
Removing sample_web_1 ... done
Removing sample_redis_1 ... done

Is project.Up() synchronous?

I am running project.Up(), and then after that, using the engine-api package to query the daemon to see which containers have exited and failed.

I am seeing some asynchronous looking behaviour:

For example: sometimes it returns an empty array (maybe the containers were not yet started before client.ContainerList() ran?).

Sometimes, I get 1 container (but there should be 2).

If project.Up() is indeed asynchronous, is there a channel I can wait on for it to complete?

Quoted numbers are converted to int types in interpolation, then fail to parse

Given:

sample:
  image: amazon/amazon-ecs-sample
  environment:
    test_int_as_string:  "100"

The result is a panic:

panic: interface conversion: interface is int64, not string [recovered]
    panic: interface conversion: interface is int64, not string

goroutine 1 [running]:
github.com/docker/libcompose/vendor/github.com/cloudfoundry-incubator/candiedyaml.recovery(0xc82016e728)
    /Users/lachlan/.go/src/github.com/docker/libcompose/vendor/github.com/cloudfoundry-incubator/candiedyaml/decode.go:83 +0xb4
github.com/docker/libcompose/project.(*MaporEqualSlice).UnmarshalYAML(0xc8201b6418, 0x600690, 0x15, 0x4123a0, 0xc8201911a0, 0x0, 0x0)
    /Users/lachlan/.go/src/github.com/docker/libcompose/project/types_yaml.go:190 +0x4cc
github.com/docker/libcompose/vendor/github.com/cloudfoundry-incubator/candiedyaml.(*Decoder).mapping.func1(0xfa4fc8, 0xc8201b6418, 0xc82016df50, 0xc820185500)
    /Users/lachlan/.go/src/github.com/docker/libcompose/vendor/github.com/cloudfoundry-incubator/candiedyaml/decode.go:385 +0x81
github.com/docker/libcompose/vendor/github.com/cloudfoundry-incubator/candiedyaml.(*Decoder).mapping(0xc820185500, 0x411460, 0xc8201dc340, 0xd4)
    /Users/lachlan/.go/src/github.com/docker/libcompose/vendor/github.com/cloudfoundry-incubator/candiedyaml/decode.go:396 +0x29f
github.com/docker/libcompose/vendor/github.com/cloudfoundry-incubator/candiedyaml.(*Decoder).parse(0xc820185500, 0x505ee0, 0xc8201b6418, 0xd9)
    /Users/lachlan/.go/src/github.com/docker/libcompose/vendor/github.com/cloudfoundry-incubator/candiedyaml/decode.go:206 +0x852
github.com/docker/libcompose/vendor/github.com/cloudfoundry-incubator/candiedyaml.(*Decoder).mappingStruct(0xc820185500, 0x5894e0, 0xc8201b6300, 0xd9)
    /Users/lachlan/.go/src/github.com/docker/libcompose/vendor/github.com/cloudfoundry-incubator/candiedyaml/decode.go:493 +0x503
github.com/docker/libcompose/vendor/github.com/cloudfoundry-incubator/candiedyaml.(*Decoder).mapping(0xc820185500, 0x5894e0, 0xc8201b6300, 0xd9)
    /Users/lachlan/.go/src/github.com/docker/libcompose/vendor/github.com/cloudfoundry-incubator/candiedyaml/decode.go:402 +0x790
github.com/docker/libcompose/vendor/github.com/cloudfoundry-incubator/candiedyaml.(*Decoder).parse(0xc820185500, 0x3eda00, 0xc8201b6300, 0x16)
    /Users/lachlan/.go/src/github.com/docker/libcompose/vendor/github.com/cloudfoundry-incubator/candiedyaml/decode.go:206 +0x852
github.com/docker/libcompose/vendor/github.com/cloudfoundry-incubator/candiedyaml.(*Decoder).document(0xc820185500, 0x3eda00, 0xc8201b6300, 0x16)
    /Users/lachlan/.go/src/github.com/docker/libcompose/vendor/github.com/cloudfoundry-incubator/candiedyaml/decode.go:182 +0x19a
github.com/docker/libcompose/vendor/github.com/cloudfoundry-incubator/candiedyaml.(*Decoder).Decode(0xc820185500, 0x3eda00, 0xc8201b6300, 0x0, 0x0)
    /Users/lachlan/.go/src/github.com/docker/libcompose/vendor/github.com/cloudfoundry-incubator/candiedyaml/decode.go:133 +0x27a
github.com/docker/libcompose/vendor/github.com/cloudfoundry-incubator/candiedyaml.Unmarshal(0xc8201a64e0, 0xc1, 0xc1, 0x3eda00, 0xc8201b6300, 0x0, 0x0)
    /Users/lachlan/.go/src/github.com/docker/libcompose/vendor/github.com/cloudfoundry-incubator/candiedyaml/decode.go:102 +0xf5
github.com/docker/libcompose/utils.Convert(0x4a86c0, 0xc820190b40, 0x3eda00, 0xc8201b6300, 0x0, 0x0)
    /Users/lachlan/.go/src/github.com/docker/libcompose/utils/util.go:66 +0xcc
github.com/docker/libcompose/project.readEnvFile(0xfa4ca8, 0x884698, 0x7fff5fbff959, 0x26, 0xc820190b40, 0x4177a0, 0x0, 0x0)
    /Users/lachlan/.go/src/github.com/docker/libcompose/project/merge.go:76 +0x9e
github.com/docker/libcompose/project.parse(0xfa4ca8, 0x884698, 0xfa4cd0, 0x884698, 0x7fff5fbff959, 0x26, 0xc820190b40, 0xc820190a80, 0xc8200a38c0, 0x0, ...)
    /Users/lachlan/.go/src/github.com/docker/libcompose/project/merge.go:158 +0x79
github.com/docker/libcompose/project.mergeProject(0xc8200b1400, 0xc820184e00, 0x102, 0x302, 0xc82016f0f8, 0x0, 0x0)
    /Users/lachlan/.go/src/github.com/docker/libcompose/project/merge.go:47 +0x300
github.com/docker/libcompose/project.(*Project).Load(0xc8200b1400, 0xc820184e00, 0x102, 0x302, 0x0, 0x0)
    /Users/lachlan/.go/src/github.com/docker/libcompose/project/project.go:127 +0xf6
github.com/docker/libcompose/project.(*Project).Parse(0xc8200b1400, 0x0, 0x0)
    /Users/lachlan/.go/src/github.com/docker/libcompose/project/project.go:73 +0x148
github.com/docker/libcompose/docker.NewProject(0xc8200ba540, 0xc8200ba540, 0x0, 0x0)
    /Users/lachlan/.go/src/github.com/docker/libcompose/docker/project.go:40 +0x3b3
github.com/99designs/ecs-cli/compose.TransformComposeFile(0x7fff5fbff959, 0x26, 0x7fff5fbff951, 0x4, 0x1, 0x0, 0x0)
    /Users/lachlan/Projects/99designs/go/src/github.com/99designs/ecs-cli/compose/transform.go:26 +0x251
main.main()
    /Users/lachlan/Projects/99designs/go/src/github.com/99designs/ecs-cli/cli/ecs-compose-task/main.go:23 +0x183

Split integration test

Currently all the integration tests are in basic_test.go, which is not clear.
I want to split the tests by command, so that we can improve each command's tests in its own file.

Is the LGPL license of the YAML library compatible?

I'm really not sure on this one, but we should probably check with a legal team (if we haven't already) to make sure that vendoring / statically linking the YAML parsing library, which is released under the LGPL license, is compatible with the overall project's Apache license.

libcompose should be go gettable/buildable/testable

As documented, the build and test processes are currently driven by a Makefile performing actions in a container.

Since this is a library, it should be go gettable/buildable/testable out of the box without having to run a non-standard process.

There are apparently dependencies on non-pkg parts of docker which make this tricky.

Strict yaml validation?

I have mistakes in my docker-compose.yml file such as mistyped keys (fooooocommand):

consul:
  fooooocommand: -server -node master2 -advertise 10.0.0.6 -join 10.0.0.4 -join 10.0.0.5
  image: progrium/consul
  ports:
  - 8300:8300

libcompose seems to ignore them; docker-compose, however, does strict validation and errors out. We probably need the same behavior here.

Create a release.sh script

In hack/release.sh: it should run the tests, merge into master, tag the branch and publish it to the GitHub releases page.

Revamp cli with spf13/cobra

Switch to spf13/cobra for the CLI and make it more docker-compose like.

  • docker-compose
$ docker-compose up -d
Creating toto_simple_1
Creating toto_another_1
 $ docker-compose stop
Stopping toto_another_1 ... done
Stopping toto_simple_1 ... done
 $ docker-compose rm -f 
Going to remove toto_another_1, toto_simple_1
Removing toto_another_1 ... done
Removing toto_simple_1 ... done
  • libcompose
$ libcompose-cli up -d
WARN[0000] Note: This is an experimental alternate implementation of the Compose CLI (https://github.com/docker/compose) 
INFO[0000] Project [toto]: Starting project             
INFO[0000] [0/2] [simple]: Starting                     
INFO[0000] [0/2] [another]: Starting                    
INFO[0000] [1/2] [simple]: Started                      
INFO[0000] [2/2] [another]: Started 
$ libcompose-cli stop
WARN[0000] Note: This is an experimental alternate implementation of the Compose CLI (https://github.com/docker/compose) 
INFO[0000] Project [toto]: Stopping project             
INFO[0000] [0/2] [simple]: Stopping                     
INFO[0000] [0/2] [another]: Stopping                    
INFO[0000] [0/2] [another]: Stopped                     
INFO[0000] [0/2] [simple]: Stopped                      
INFO[0000] Project [toto]: Project stopped 
$ libcompose-cli rm -f
WARN[0000] Note: This is an experimental alternate implementation of the Compose CLI (https://github.com/docker/compose) 
INFO[0000] [0/2] [simple]: Deleting                     
INFO[0000] [0/2] [another]: Deleting                    
INFO[0000] [0/2] [simple]: Deleted                      
INFO[0000] [0/2] [another]: Deleted                     
INFO[0000] Project [toto]: Deleting project             
INFO[0000] [0/2] [simple]: Deleting                     
INFO[0000] [0/2] [another]: Deleting                    
INFO[0000] [0/2] [simple]: Deleted                      
INFO[0000] [0/2] [another]: Deleted                     
INFO[0000] Project [toto]: Project deleted

Support image push

What are your thoughts on a form of image push with AuthConfig support? Is that something which should be added? Where would that fit?

Bundles is empty with OSX build

Building on OSX succeeds; unfortunately, the bundles folder is empty.

Removing intermediate container cd4a37948ffa
Step 16 : COPY . /go/src/github.com/docker/libcompose
 ---> 263d4756d9f4
Removing intermediate container 6dcf92f3809f
Successfully built 263d4756d9f4
docker run --rm -it --privileged -e OS_PLATFORM_ARG -e OS_ARCH_ARG -e DOCKER_TEST_HOST -e TESTDIRS -e TESTFLAGS -e TESTVERBOSE  "libcompose-dev:master" ./script/make.sh binary
nn---> Making bundle: binary (in .)
Number of parallel builds: 1

-->   windows/amd64: github.com/docker/libcompose/cli/main
-->      darwin/386: github.com/docker/libcompose/cli/main
-->    darwin/amd64: github.com/docker/libcompose/cli/main
-->       linux/386: github.com/docker/libcompose/cli/main
-->     linux/amd64: github.com/docker/libcompose/cli/main
-->       linux/arm: github.com/docker/libcompose/cli/main
-->     windows/386: github.com/docker/libcompose/cli/main

bash-3.2$ ls -la bundles/
total 0
drwxr-xr-x   2 apple  staff   68 Nov  8 00:36 .
drwxr-xr-x  27 apple  staff  918 Nov  8 00:36 ..

Proposal : Enable `golint` on the code base

Taking the description directly from docker#14756.

We want to enable golint on our codebase for several reasons:

  • We want to improve code quality
  • We need objective filters on quality to help us discriminate bad pull requests

There is some work to do, but I think it would be better to do it as early as possible (the earlier, the easier). And as libcompose is intended to be used as a library more than just a CLI, I think documentation and code quality are important.

WDYT about that ?
If you think we should do it, we'll keep track of the work on this issue then 😉.

Package (and linting status) :

  • cli/app #31
  • cli/command #31
  • cli/docker/app #31
  • cli/logger #31
  • cli/main
  • docker #51
  • logger #50
  • lookup
  • project #49
  • utils #37
  • version

Allow running command

It would be nice if it were possible to run a command in a service, as an equivalent of

docker-compose run `service` `command`

I might be able to submit a PR for this within the next couple of days.

Cheers

project.Log does not push logs

I'm using a custom LoggerFactory within my project context. The factory.Create function is called, and a logger returned, however Out and Err are never called on the logger despite the container having logs, and me calling the proj.Logs function.

package libcompose

import (
    "fmt"
    "io"

    "github.com/codeship/go-ledge"
    "github.com/codeship/jet/log"
    "github.com/docker/libcompose/logger"
)

type LoggerFactory interface {
    Create(string) logger.Logger
    RegisterLoggers(string, jet_log.Category, ledge.Event, ledge.Event)
}

type loggerFactory struct {
    loggers     map[string]logger.Logger
    ledgeLogger ledge.Logger
}

func NewLoggerFactory(ledgeLogger ledge.Logger) LoggerFactory {
    return &loggerFactory{
        ledgeLogger: ledgeLogger,
        loggers:     make(map[string]logger.Logger),
    }
}

func (l *loggerFactory) Create(name string) logger.Logger {
    fmt.Printf("Requested logger for %s\n", name)
    // TODO remove, picking the first logger for now
    for _, v := range l.loggers {
        fmt.Printf("Returning logger %+v\n", v)
        // test logger
        v.Out([]byte("test"))
        v.Err([]byte("foo"))
        return v
    }
    return l.loggers[name]
}

func (l *loggerFactory) RegisterLoggers(name string, category jet_log.Category, OutEvent ledge.Event, ErrEvent ledge.Event) {
    loggerAdapter := &loggerAdapter{
        OutputWriter: l.ledgeLogger.WithContext(category).InfoWriter(OutEvent),
        ErrorWriter:  l.ledgeLogger.WithContext(category).InfoWriter(ErrEvent),
    }
    l.loggers[name] = loggerAdapter
}

type loggerAdapter struct {
    OutputWriter io.Writer
    ErrorWriter  io.Writer
}

func (l *loggerAdapter) Out(data []byte) {
    _, err := l.OutputWriter.Write(data)
    if err != nil {
        fmt.Printf("error writing: %s\n", err.Error())
        // TODO handle error case
    }
}

func (l *loggerAdapter) Err(data []byte) {
    _, err := l.ErrorWriter.Write(data)
    if err != nil {
        fmt.Printf("error writing: %s\n", err.Error())
        // TODO handle error case
    }
}

Output issues on windows

I am running project.Up() against a brand new docker host. Libcompose is being used as part of a program that runs on Windows.

When the docker daemon pulls an image, output lines are not replaced, but a new line is created:

latest: Pulling from library/postgres
7268d8f794c4: Already exists
a3ed95caeb02: Already exists
8b7b4081cf2f: Pulling fs layer
b56c122a9ff3: Pulling fs layer
be2d56e82721: Pulling fs layer
72be9ca5a69d: Pulling fs layer
a3ed95caeb02: Pulling fs layer
9195680bca6c: Pulling fs layer
05395a3576bb: Pulling fs layer
d3a137ddbe53: Pulling fs layer
2dbf2ca048a1: Pulling fs layer
c28784ac0b5d: Pulling fs layer
9b433e892030: Pulling fs layer
72be9ca5a69d: Waiting
05395a3576bb: Waiting
d3a137ddbe53: Waiting
2dbf2ca048a1: Waiting
c28784ac0b5d: Waiting
a3ed95caeb02: Waiting
9195680bca6c: Waiting
9b433e892030: Waiting
b56c122a9ff3: Verifying Checksum
b56c122a9ff3: Download complete
8b7b4081cf2f: Verifying Checksum
8b7b4081cf2f: Pull complete
8b7b4081cf2f: Pull complete
b56c122a9ff3: Pull complete
b56c122a9ff3: Pull complete
be2d56e82721: Verifying Checksum
be2d56e82721: Download complete
a3ed95caeb02: Verifying Checksum
a3ed95caeb02: Download complete
be2d56e82721: Pull complete
be2d56e82721: Pull complete
05395a3576bb: Verifying Checksum
05395a3576bb: Download complete
9195680bca6c: Verifying Checksum
9195680bca6c: Download complete
d3a137ddbe53: Verifying Checksum
d3a137ddbe53: Download complete
c28784ac0b5d: Download complete
9b433e892030: Verifying Checksum
9b433e892030: Download complete
72be9ca5a69d: Verifying Checksum
72be9ca5a69d: Download complete
72be9ca5a69d: Pull complete
72be9ca5a69d: Pull complete
a3ed95caeb02: Pull complete
a3ed95caeb02: Pull complete
9195680bca6c: Pull complete
9195680bca6c: Pull complete
05395a3576bb: Pull complete
05395a3576bb: Pull complete
d3a137ddbe53: Pull complete
d3a137ddbe53: Pull complete
2dbf2ca048a1: Verifying Checksum
2dbf2ca048a1: Download complete
2dbf2ca048a1: Pull complete
2dbf2ca048a1: Pull complete
c28784ac0b5d: Pull complete
c28784ac0b5d: Pull complete
9b433e892030: Pull complete
9b433e892030: Pull complete
Digest: sha256:622c5c003788971221e94f642fea0af341b580dc9cc71d9a0159769c2d8fa905

Numeric attributes should also be accepted as strings

Docker Compose accepts strings for all attributes that accept numbers. For example, these both should be valid:

mem_limit: 40000000
mem_limit: "40000000"

Currently, only the former is accepted by libcompose. This also breaks interpolation since variables are always inserted as strings.

One possible solution is to change the types from int64 to the following:

type Stringorint64 struct {
    value int64
}

func (s Stringorint64) MarshalYAML() (interface{}, error) {
    return s.value, nil
}

func (s *Stringorint64) UnmarshalYAML(unmarshal func(interface{}) error) error {
    var str string
    err := unmarshal(&str)

    if err != nil {
        return err
    }

    i, err := strconv.ParseInt(str, 10, 64)

    if err != nil {
        return err
    }

    s.value = i

    return nil
}

func (s *Stringorint64) Value() int64 {
    return s.value
}

This works, but it's a little cumbersome to have to use Value() to access those attributes. Any better ideas to go about this?

The rm command always require `--force`

The rm command only works when the --force flag is specified. This is not the expected behavior, and not how docker-compose behaves.

The --force flag works as expected when the containers are still running. Even then the semantics differ: docker-compose rm --force just doesn't ask for confirmation, but it will not remove running containers at all.

Steps to reproduce

Let's use the following docker-compose.yml :

web:
    image: nginx

Using libcompose-cli

$ libcompose-cli up -d
# […]
$ libcompose-cli stop
# […]
$ libcompose-cli rm
# […]
FATA[0000] Will not remove all services without --force 

Using docker-compose

$ docker-compose up -d
Creating compose_web_1
$ docker-compose stop
Stopping compose_web_1 ... done
$ docker-compose rm
Going to remove compose_web_1
Are you sure? [yN] y
Removing compose_web_1 ... done

libcompose{,-cli}/docker-compose "commutable"

Should we be able to start a composition using docker-compose and then continue managing those services using libcompose-cli or directly through the API?

I do think it should be commutable, which is not the case right now.

$ docker-compose up -d
Creating compose_web_1
Creating compose_db_1
$ libcompose-cli ps
# doesn't output anything (except the warning log)

$ libcompose-cli up -d
INFO[0000] Project [compose]: Starting project          
INFO[0000] [0/2] [web]: Starting                        
INFO[0000] [0/2] [db]: Starting                         
INFO[0000] [1/2] [web]: Started                         
INFO[0000] [2/2] [db]: Started 
$ docker-compose ps
ERROR: 
Compose found the following containers without labels:

    compose_web_1
    compose_db_1

As of Compose 1.3.0, containers are identified with labels instead of naming
convention. If you want to continue using these containers, run:

    $ docker-compose migrate-to-labels

Alternatively, remove them:

    $ docker rm -f compose_web_1 compose_db_1

We do generate labels with libcompose, but they seem to be outdated and thus not recognized by docker-compose:

  • with docker-compose
"Labels": {
    "com.docker.compose.config-hash": "9fce3077c9cdae0da859eb66a11677756fcff8703bcbffac810c1ce3ea33eb65",
    "com.docker.compose.container-number": "1",
    "com.docker.compose.oneoff": "False",
    "com.docker.compose.project": "compose",
    "com.docker.compose.service": "db",
    "com.docker.compose.version": "1.5.0"
}
  • with libcompose{,-cli}
"Labels": {
    "io.docker.compose.config-hash": "f85a1f8e9e5bbde593aed5395ea0a5048ecceb62",
    "io.docker.compose.name": "compose_db_1",
    "io.docker.compose.project": "compose",
    "io.docker.compose.service": "db"
}
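
Going by the two label sets above, compatibility with docker-compose >= 1.3.0 would mean writing the com.docker.compose.* keys instead of the io.docker.compose.* ones. A minimal sketch of building a compatible label map (the function name and the version value are illustrative, not taken from the libcompose code):

```go
package main

import (
	"fmt"
	"strconv"
)

// composeLabels builds the label set docker-compose >= 1.3.0 looks for,
// using the com.docker.compose.* keys instead of io.docker.compose.*.
// Hypothetical helper: libcompose's actual label handling may differ.
func composeLabels(project, service, configHash string, number int, oneOff bool) map[string]string {
	oneOffStr := "False"
	if oneOff {
		oneOffStr = "True"
	}
	return map[string]string{
		"com.docker.compose.project":          project,
		"com.docker.compose.service":          service,
		"com.docker.compose.config-hash":      configHash,
		"com.docker.compose.container-number": strconv.Itoa(number),
		"com.docker.compose.oneoff":           oneOffStr,
		"com.docker.compose.version":          "1.5.0", // whichever compose version is targeted
	}
}

func main() {
	labels := composeLabels("compose", "db", "abc123", 1, false)
	fmt.Println(labels["com.docker.compose.service"]) // prints "db"
}
```

Matching the config-hash algorithm as well would be needed for docker-compose to treat the containers as up to date rather than merely recognized.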

Strange "volumes_from" behaviour?

I've been playing around with libcompose-cli on the Raspberry Pi (ARM) and ran into what appears to be an issue with volumes_from. The following, which I would do using docker-compose, does not work:

server/docker-compose.yml

first:
  image: sander85/rpi-busybox
  volumes:
    - /bundle
second:
  image: sander85/rpi-busybox
  volumes_from:
    - first

Output

$ libcompose-cli up
WARN[0000] Note: This is an experimental alternate implementation of the Compose CLI (https://github.com/docker/compose) 
INFO[0000] [0/2] [first]: Starting                      
INFO[0003] [1/2] [first]: Started                       
INFO[0003] [1/2] [second]: Starting                     
ERRO[0004] Failed Starting second : 500 Internal Server Error: Cannot start container b341d02578c2cedd8e5590823e9437097885dfaf26ef240436fe72109a426f27: Could not apply volumes of non-existent container "first".

ERRO[0004] Failed to start: second : 500 Internal Server Error: Cannot start container b341d02578c2cedd8e5590823e9437097885dfaf26ef240436fe72109a426f27: Could not apply volumes of non-existent container "first".

FATA[0004] 500 Internal Server Error: Cannot start container b341d02578c2cedd8e5590823e9437097885dfaf26ef240436fe72109a426f27: Could not apply volumes of non-existent container "first".

However, if I use the full container name in volumes_from (that is, "server_first_1" instead of just "first"), it appears to make better progress:

$ libcompose-cli up
WARN[0000] Note: This is an experimental alternate implementation of the Compose CLI (https://github.com/docker/compose) 
ERRO[0000] Failed to find server_first_1                
ERRO[0000] Failed to find server_first_1                
INFO[0000] [0/2] [second]: Starting                     
INFO[0006] [1/2] [first]: Started                       
INFO[0006] [2/2] [second]: Started 

Interestingly, I did some testing on an x86 machine and the first docker-compose.yml (using the appropriate busybox image of course) works on the second attempt.

$ ./libcompose-cli_linux-amd64 up
WARN[0000] Note: This is an experimental alternate implementation of the Compose CLI (https://github.com/docker/compose) 
INFO[0000] [0/2] [first]: Starting                      
INFO[0000] [1/2] [first]: Started                       
INFO[0000] [1/2] [second]: Starting                     
ERRO[0004] Failed Starting second : 409 Conflict: Conflict. The name "bundles_second_1" is already in use by container 315ccdcd6f77. You have to delete (or rename) that container to be able to reuse that name.

ERRO[0004] Failed to start: second : 409 Conflict: Conflict. The name "bundles_second_1" is already in use by container 315ccdcd6f77. You have to delete (or rename) that container to be able to reuse that name.

FATA[0004] 409 Conflict: Conflict. The name "bundles_second_1" is already in use by container 315ccdcd6f77. You have to delete (or rename) that container to be able to reuse that name.

$ ./libcompose-cli_linux-amd64 up
WARN[0000] Note: This is an experimental alternate implementation of the Compose CLI (https://github.com/docker/compose) 
INFO[0000] [0/2] [first]: Starting                      
INFO[0000] [1/2] [first]: Started                       
INFO[0000] [1/2] [second]: Starting                     
INFO[0000] [2/2] [second]: Started                      
INFO[0000] Project [bundles]: Project started 

I understand that libcompose is not meant to be a replacement for docker-compose but I thought I'd leave this here anyway just in case it contains useful information.

Libcompose 1.0.0 <-> Compose 1.6.0 features

I've tried to come up with a feature grid / missing-feature recap in order to know what is needed to achieve complete compatibility with docker-compose, and thus make it easy to follow afterwards.

The goal of this issue is to keep track of what is done and what's not. As it might be a little big, we'll probably split this up into small issues for each missing part.

I've created a 1.0.0 milestone (with a cool name I think 😝) to start having some goals and helping us to focus. I also want to follow the releases of the other docker projects (but that's another story).

Anyway, this is what I've got so far. Note that some things are present in libcompose but not in docker-compose.

ping @docker/libcompose-maintainers @ibuildthecloud @imikushin for feedback.
I probably missed some stuff so, let me know 😉.


Composefile

Reference (v1)

  • build
  • cap_add, cap_drop
  • command
  • cgroup_parent #134 (PR)
  • container_name
  • devices
  • dns
  • dns_search
  • dockerfile
  • env_file
  • environment
  • expose
  • extends
  • external_links
  • extra_hosts
  • image
  • labels
  • links
  • log_driver
  • log_opt
  • net
  • pid
  • ports
  • security_opt
  • ulimits #134 (PR)
  • volumes, volume_driver
  • volumes_from
  • cpu_shares
  • cpu_quota #134 (PR)
  • cpuset
  • domainname
  • entrypoint
  • hostname
  • ipc
  • mac_address #134 (PR)
  • mem_limit
  • memswap_limit
  • privileged
  • read_only
  • restart
  • stdin_open
  • tty
  • user
  • working_dir
  • uts not supported in docker-compose ?

Reference (v2)

To support 👼

CLI

Commands & flags

Commands

  • build
    • --no-cache
    • --pull
    • --force-rm
  • config
    • -q, --quiet
    • --services
  • create
  • down (currently an alias to stop)
  • help
    • Some help are not working properly (e.g. libcompose-cli help build) — carry #102.
  • kill
    • -s SIGNAL (libcompose-cli supports --signal)
  • logs
    • --no-color
    • --lines supported by libcompose-cli but not by docker-compose
  • pause #128 (PR)
  • port
    • --protocol=proto
    • --index=index
  • ps
    • -q
  • pull
    • --ignore-pull-failures
    • --allow-insecure-ssl deprecated, won't add.
  • restart
    • -t, --timeout TIMEOUT
  • rm #129 (issue), #130 (PR)
    • -f, --force
    • -v
  • run #38 (issue), #45 #133 (PR)
    • --allow-insecure-ssl deprecated, won't add.
    • -d
    • --name NAME
    • --entrypoint CMD
    • -e KEY=VAL
    • -u, --user=""
    • --no-deps
    • --rm
    • -p, --publish=[]
    • --service-ports
    • -T
  • scale
    • -t, --timeout TIMEOUT
  • start
    • -d supported by libcompose-cli but not by docker-compose
  • stop
    • -t, --timeout TIMEOUT
  • unpause #128 (PR)
  • up
    • --allow-insecure-ssl deprecated, won't add.
    • -d
    • --no-color
    • --no-deps
    • --force-recreate
    • --no-recreate
    • --no-build #136
    • -t, --timeout TIMEOUT
  • migrate-to-labels: to implement ?
  • version
    • --short

Global flags

  • -f, --file FILE
  • -p, --project-name NAME
  • --x-networking
  • --x-network-driver
  • --verbose is also --debug on libcompose-cli
  • -v, --version
  • TLS related flags (--tls, --tlsverify, --tlscacert,
    --tlscert, --tlskey) : how it is handled in docker-compose (compose#1569)
  • --configdir
  • --help, -h should move to help only ?
  • --version, -v should move to version only

Environment Variables

  • COMPOSE_PROJECT_NAME
  • COMPOSE_FILE
  • COMPOSE_API_VERSION
  • DOCKER_* (DOCKER_HOST, DOCKER_TLS_VERIFY,
    DOCKER_CERT_PATH, COMPOSE_HTTP_TIMEOUT)

Miscellaneous

  • yaml schema validation #34 (issue) and #99 (PR)
  • Refactor flag system (switch to cobra or …) ?
  • completion support ? (probably re-use docker-compose one but…)
  • docker-compose up -d then libcompose-cli ps doesn't work >_<, see issue #132
  • What to do about the warn message
  • not allow restart: no but require restart: 'no' #72
  • multiple file (-f and docker-compose.override.yml) #116
  • Preserve volume data when containers are created
  • Only recreate containers that have changed
  • Variables and moving a composition between environments
  • Support multiple version of compose.yml (with key version: 2), see docker/compose#2421

Maybe

These are in discussion / review on docker-compose, so as long as they are not merged or certain to land in 1.6.0, they probably shouldn't be in 1.0.0 of libcompose. I'll create issues to track them.

Integration API tests

The main value of libcompose is the lib part, not the generated binary.
Thus it would make much more sense to have more API integration tests than actual CLI tests.

This also has to be considered alongside the acceptance tests already in place (so as not to duplicate work).

Inaccurate service name in the logs

This yml file:

sleep1:
  command: sleep 8888888
  image: busybox
  restart: always
sleep2:
  command: sleep 9999999
  image: busybox
  restart: always

results in the following logs (from the example/main.go):

INFO[0000] [0/2] [sleep1]: Starting
INFO[0000] [1/2] [sleep1]: Started

note that there are two sleep1s. However, docker ps shows that sleep1 and sleep2 are created successfully...

So if I add another sleep3

sleep1:
  command: sleep 8888888
  image: busybox
  restart: always
sleep2:
  command: sleep 9999999
  image: busybox
  restart: always
sleep3:
  command: sleep 7777777
  image: busybox
  restart: always

it works fine:

INFO[0000] [0/3] [sleep1]: Starting
INFO[0001] [1/3] [sleep2]: Started
INFO[0001] [2/3] [sleep3]: Started

I'm guessing there's something wrong with the logs.

Support loggerfactory for building images

I'd rather not capture and lock stdout :P so what would work better for me is to support logger factories for creating loggers that capture build output for specific services or images. I'd like some guidance on how this would integrate well. My initial thought is to tie into a new BuildLoggerFactory specifically, for https://github.com/docker/libcompose/blob/master/docker/builder.go#L60, and default it to a stdout logger to preserve backwards compatibility.

thoughts?

Two improvements about libcompose command help

improvements

No.1

The command usage is not displayed for commands that have no options.
Maybe this is a bug.

No.2

Many libcompose commands support specifying services as command arguments.
But this is not displayed in the command help.

Example

Current display:


root@SZX1000041895:~/sleep# libcompose-cli help create
WARN[0000] Note: This is an experimental alternate implementation of the Compose CLI (https://github.com/docker/compose)
Usage: libcompose-cli create

Should be:


root@SZX1000041895:~/sleep# ./libcompose-cli help create
WARN[0000] Note: This is an experimental alternate implementation of the Compose CLI (https://github.com/docker/compose)
Usage: libcompose-cli create [SERVICE...]

Create all services but do not start

Support Bash Completion

This idea is to add bash completion for libcompose.

Below is a bash completion example:


// press tab twice after typing "docker k": "kill" is completed and the containers are printed.
root@SZX1000041895:~/sleep# docker ps
CONTAINER ID        IMAGE                        COMMAND             CREATED             STATUS              PORTS               NAMES
d3b5210874af        mbrt/golang-vim-dev:latest   "/bin/bash"         2 days ago          Up 2 days                               myvim

root@SZX1000041895:~/sleep# docker kill
d3b5210874af9c4d9fe9041a47928e551c719c2281758e73558f24234d36b807  myvim

// press tab after typing "docker r", then all commands that begin with "r" will be displayed.
root@SZX1000041895:~/sleep# docker help r
rename   restart  rm       rmi      run

`make all` doesn't do "all"

I'm not sure how the build system and Jenkins are set up, so I don't fully understand the Makefile. Currently, make all just does a build and does not run tests. I keep forgetting to run test or validate before I check in, resulting in failed builds on Jenkins.

Is there a simple target I can run that will run all tests and validation, basically simulating what Jenkins would do?

@vdemeester

Implement `config` subcommand

Add the config subcommand for libcompose.

# config

Usage: config [options]

Options:
-q, --quiet     Only validate the configuration, don't print
                anything.
--services      Print the service names, one per line.

Validate and view the compose file.

Here is what it looks like:

$ docker-compose config
networks: {}
services:
  another:
    command: top
    image: busybox:latest
  simple:
    command: top
    image: busybox:latest
version: '1'

Probably wait for #155 to be closed.

While scaling, libcompose does not print an error message for a port clash.

Expected behaviour

$ docker-compose scale web=2
WARNING: The "web" service specifies a port on the host. If multiple containers for this service are created on a single host, the port will clash.
Starting sample_web_1 ... error
Starting sample_web_2 ... done

ERROR: for sample_web_1  failed to create endpoint sample_web_1 on network bridge: Bind for 0.0.0.0:5000 failed: port is already allocated

Actual Behaviour

$ libcompose scale web=2
WARN[0000] Note: This is an experimental alternate implementation of the Compose CLI (https://github.com/docker/compose)
INFO[0000] Setting scale web=2...
WARN[0000] The "web" service specifies a port on the host. If multiple containers for this service are created on a single host, the port will clash.
WARN[0000] The mapping "." is ambiguous. In a future version of Docker, it will designate a "named" volume (see https://github.com/docker/docker/pull/14242). To prevent unexpected behaviour, change it to "./.".

command.go upCommand doesn't implement --abort-on-container-exit

Docker Compose has a feature docker-compose up --abort-on-container-exit which is particularly useful for running integration tests - where you need to watch the container shut down and get the exit code.

I'm trying to run the equivalent in Rancher, and ran into an error: Incorrect Usage

They haven't implemented it yet, because it isn't implemented in libcompose.

The Rancher guys have pointed to the command.go upCommand function as being the culprit.

My issue is: command.go upCommand doesn't implement --abort-on-container-exit
