
cli's Introduction

Docker CLI

PkgGoDev Build Status Test Status Go Report Card Codecov

About

This repository is the home of the Docker CLI.

Development

docker/cli is developed using Docker.

Build CLI from source:

docker buildx bake

Build binaries for all supported platforms:

docker buildx bake cross

Build for a specific platform:

docker buildx bake --set binary.platform=linux/arm64 

Build dynamic binary for glibc or musl:

USE_GLIBC=1 docker buildx bake dynbinary 

Run all linting:

docker buildx bake lint shellcheck

Run test:

docker buildx bake test

List all the available targets:

make help

In-container development environment

Start an interactive development environment:

make -f docker.Makefile shell

Legal

Brought to you courtesy of our legal counsel. For more context, please see the NOTICE document in this repo.

Use and transfer of Docker may be subject to certain restrictions by the United States and other governments.

It is your responsibility to ensure that your use and/or transfer does not violate applicable laws.

For more information, please see https://www.bis.doc.gov

Licensing

docker/cli is licensed under the Apache License, Version 2.0. See LICENSE for the full license text.

cli's People

Contributors

aaronlehmann, adshmh, akihirosuda, albers, allencloud, anusha-ragunathan, calavera, cpuguy83, crazy-max, crosbymichael, dnephin, dvdksn, ehazlett, ijc, laurazard, lk4d4, lowenna, mlaventure, riyazdf, runcom, sdurrheimer, silvin-lubecki, svendowideit, thajeztah, tiborvass, tonistiigi, vieux, vvoland, yongtang, yuexiao-wang


cli's Issues

[docker service ls] should recognize and format published port ranges

I have some services that publish lots of ports in a range to the Docker ingress network (this is a reverse proxy). The docker service ls command does not recognize the port range and instead outputs the individual port mappings, which looks pretty ugly.

It would be really nice if the CLI could recognize a range and output a syntax like: *:5300-5499->5300-5499

In the example below, the neon-proxy-private service published ports 5300-5499 and the neon-proxy-public service published 5100-5299. You can see the long list of individual ports by scrolling to the right.

root@test-manager-0:~# docker service ls
ID                  NAME                   MODE                REPLICAS            IMAGE                                     PORTS
144o4lwwtjkp        neon-proxy-private     global              3/3                 neoncluster/neon-proxy:latest             *:5386->5386/tcp,*:5404->5404/tcp,*:5469->5469/tcp,*:5491->5491/tcp,*:5495->5495/tcp,*:5303->5303/tcp,*:5327->5327/tcp,*:5378->5378/tcp,*:5420->5420/tcp,*:5471->5471/tcp,*:5472->5472/tcp,*:5499->5499/tcp,*:5332->5332/tcp,*:5366->5366/tcp,*:5383->5383/tcp,*:5422->5422/tcp,*:5432->5432/tcp,*:5476->5476/tcp,*:5435->5435/tcp,*:5467->5467/tcp,*:5310->5310/tcp,*:5311->5311/tcp,*:5340->5340/tcp,*:5380->5380/tcp,*:5416->5416/tcp,*:5427->5427/tcp,*:5324->5324/tcp,*:5389->5389/tcp,*:5423->5423/tcp,*:5493->5493/tcp,*:5374->5374/tcp,*:5410->5410/tcp,*:5453->5453/tcp,*:5484->5484/tcp,*:5315->5315/tcp,*:5333->5333/tcp,*:5405->5405/tcp,*:5449->5449/tcp,*:5492->5492/tcp,*:5341->5341/tcp,*:5379->5379/tcp,*:5425->5425/tcp,*:5428->5428/tcp,*:5485->5485/tcp,*:5358->5358/tcp,*:5373->5373/tcp,*:5375->5375/tcp,*:5421->5421/tcp,*:5331->5331/tcp,*:5345->5345/tcp,*:5464->5464/tcp,*:5480->5480/tcp,*:5496->5496/tcp,*:5347->5347/tcp,*:5381->5381/tcp,*:5390->5390/tcp,*:5450->5450/tcp,*:5470->5470/tcp,*:5490->5490/tcp,*:5415->5415/tcp,*:5419->5419/tcp,*:5318->5318/tcp,*:5329->5329/tcp,*:5330->5330/tcp,*:5336->5336/tcp,*:5343->5343/tcp,*:5348->5348/tcp,*:5434->5434/tcp,*:5466->5466/tcp,*:5478->5478/tcp,*:5488->5488/tcp,*:5301->5301/tcp,*:5335->5335/tcp,*:5362->5362/tcp,*:5462->5462/tcp,*:5360->5360/tcp,*:5429->5429/tcp,*:5452->5452/tcp,*:5458->5458/tcp,*:5304->5304/tcp,*:5361->5361/tcp,*:5409->5409/tcp,*:5414->5414/tcp,*:5483->5483/tcp,*:5407->5407/tcp,*:5408->5408/tcp,*:5306->5306/tcp,*:5313->5313/tcp,*:5314->5314/tcp,*:5357->5357/tcp,*:5393->5393/tcp,*:5406->5406/tcp,*:5417->5417/tcp,*:5457->5457/tcp,*:5487->5487/tcp,*:5321->5321/tcp,*:5339->5339/tcp,*:5365->5365/tcp,*:5413->5413/tcp,*:5418->5418/tcp,*:5448->5448/tcp,*:5437->5437/tcp,*:5445->5445/tcp,*:5326->5326/tcp,*:5337->5337/tcp,*:5355->5355/tcp,*:5392->5392/tcp,*:5400->5400/tcp,*:5436->5436/tcp,*:5460->5460/tcp,*:544
7->5447/tcp,*:5451->5451/tcp,*:5308->5308/tcp,*:5316->5316/tcp,*:5382->5382/tcp,*:5391->5391/tcp,*:5401->5401/tcp,*:5411->5411/tcp,*:5456->5456/tcp,*:5454->5454/tcp,*:5481->5481/tcp,*:5302->5302/tcp,*:5305->5305/tcp,*:5351->5351/tcp,*:5356->5356/tcp,*:5364->5364/tcp,*:5446->5446/tcp,*:5497->5497/tcp,*:5465->5465/tcp,*:5475->5475/tcp,*:5322->5322/tcp,*:5369->5369/tcp,*:5376->5376/tcp,*:5388->5388/tcp,*:5438->5438/tcp,*:5443->5443/tcp,*:5498->5498/tcp,*:5300->5300/tcp,*:5325->5325/tcp,*:5398->5398/tcp,*:5403->5403/tcp,*:5424->5424/tcp,*:5489->5489/tcp,*:5359->5359/tcp,*:5363->5363/tcp,*:5397->5397/tcp,*:5426->5426/tcp,*:5442->5442/tcp,*:5342->5342/tcp,*:5352->5352/tcp,*:5367->5367/tcp,*:5395->5395/tcp,*:5482->5482/tcp,*:5319->5319/tcp,*:5320->5320/tcp,*:5377->5377/tcp,*:5430->5430/tcp,*:5440->5440/tcp,*:5486->5486/tcp,*:5328->5328/tcp,*:5338->5338/tcp,*:5371->5371/tcp,*:5387->5387/tcp,*:5399->5399/tcp,*:5494->5494/tcp,*:5323->5323/tcp,*:5384->5384/tcp,*:5394->5394/tcp,*:5439->5439/tcp,*:5459->5459/tcp,*:5474->5474/tcp,*:5307->5307/tcp,*:5353->5353/tcp,*:5455->5455/tcp,*:5433->5433/tcp,*:5461->5461/tcp,*:5312->5312/tcp,*:5368->5368/tcp,*:5372->5372/tcp,*:5385->5385/tcp,*:5402->5402/tcp,*:5431->5431/tcp,*:5473->5473/tcp,*:5317->5317/tcp,*:5346->5346/tcp,*:5350->5350/tcp,*:5354->5354/tcp,*:5412->5412/tcp,*:5444->5444/tcp,*:5463->5463/tcp,*:5477->5477/tcp,*:5309->5309/tcp,*:5334->5334/tcp,*:5349->5349/tcp,*:5370->5370/tcp,*:5396->5396/tcp,*:5441->5441/tcp,*:5344->5344/tcp,*:5468->5468/tcp,*:5479->5479/tcp
9thxlldzgs8n        neon-proxy-manager     replicated          1/1                 neoncluster/neon-proxy-manager:latest
i7opo1sjkjfx        neon-log-collector     global              3/3                 neoncluster/neon-log-collector:latest
o2v30wl8vblp        neon-proxy-vault       global              3/3                 neoncluster/neon-proxy-vault:latest       *:5003->8200/tcp
tey8vhd69qbf        neon-log-kibana        global              3/3                 neoncluster/kibana:latest                 *:5001->5601/tcp
ttz1vc6yg242        neon-proxy-public      global              3/3                 neoncluster/neon-proxy:latest             *:5261->5261/tcp,*:5269->5269/tcp,*:5125->5125/tcp,*:5192->5192/tcp,*:5163->5163/tcp,*:5213->5213/tcp,*:5214->5214/tcp,*:5256->5256/tcp,*:5289->5289/tcp,*:5106->5106/tcp,*:5126->5126/tcp,*:5129->5129/tcp,*:5157->5157/tcp,*:5159->5159/tcp,*:5165->5165/tcp,*:5181->5181/tcp,*:5219->5219/tcp,*:5108->5108/tcp,*:5123->5123/tcp,*:5296->5296/tcp,*:5267->5267/tcp,*:5291->5291/tcp,*:5258->5258/tcp,*:5272->5272/tcp,*:5274->5274/tcp,*:5295->5295/tcp,*:5202->5202/tcp,*:5255->5255/tcp,*:5264->5264/tcp,*:5281->5281/tcp,*:5243->5243/tcp,*:5262->5262/tcp,*:5197->5197/tcp,*:5201->5201/tcp,*:5206->5206/tcp,*:5218->5218/tcp,*:5130->5130/tcp,*:5148->5148/tcp,*:5137->5137/tcp,*:5149->5149/tcp,*:5179->5179/tcp,*:5198->5198/tcp,*:5228->5228/tcp,*:5250->5250/tcp,*:5111->5111/tcp,*:5133->5133/tcp,*:5297->5297/tcp,*:5167->5167/tcp,*:5171->5171/tcp,*:5207->5207/tcp,*:5208->5208/tcp,*:5224->5224/tcp,*:5124->5124/tcp,*:5146->5146/tcp,*:5186->5186/tcp,*:5220->5220/tcp,*:5229->5229/tcp,*:5231->5231/tcp,*:5265->5265/tcp,*:5162->5162/tcp,*:5166->5166/tcp,*:5248->5248/tcp,*:5277->5277/tcp,*:5174->5174/tcp,*:5244->5244/tcp,*:5284->5284/tcp,*:5287->5287/tcp,*:5290->5290/tcp,*:5158->5158/tcp,*:5215->5215/tcp,*:5153->5153/tcp,*:5164->5164/tcp,*:5173->5173/tcp,*:5205->5205/tcp,*:5236->5236/tcp,*:5138->5138/tcp,*:5141->5141/tcp,*:5176->5176/tcp,*:5193->5193/tcp,*:5209->5209/tcp,*:5221->5221/tcp,*:5238->5238/tcp,*:5260->5260/tcp,*:5127->5127/tcp,*:5140->5140/tcp,*:5286->5286/tcp,*:5292->5292/tcp,*:5185->5185/tcp,*:5188->5188/tcp,*:5191->5191/tcp,*:5210->5210/tcp,*:5257->5257/tcp,*:5288->5288/tcp,*:5122->5122/tcp,*:5169->5169/tcp,*:5152->5152/tcp,*:5271->5271/tcp,*:5283->5283/tcp,*:5293->5293/tcp,*:5172->5172/tcp,*:5195->5195/tcp,*:5183->5183/tcp,*:5196->5196/tcp,*:5203->5203/tcp,*:5225->5225/tcp,*:5266->5266/tcp,*:5131->5131/tcp,*:5132->5132/tcp,*:5118->5118/tcp,*:5139->5139/tcp,*:514
3->5143/tcp,*:5156->5156/tcp,*:5237->5237/tcp,*:5273->5273/tcp,*:5103->5103/tcp,*:5114->5114/tcp,*:5280->5280/tcp,*:5247->5247/tcp,*:5251->5251/tcp,*:5102->5102/tcp,*:5216->5216/tcp,*:5177->5177/tcp,*:5182->5182/tcp,*:5189->5189/tcp,*:5211->5211/tcp,*:5233->5233/tcp,*:5234->5234/tcp,*:5107->5107/tcp,*:5155->5155/tcp,*:5270->5270/tcp,*:5285->5285/tcp,*:5240->5240/tcp,*:5245->5245/tcp,*:5145->5145/tcp,*:5160->5160/tcp,*:5199->5199/tcp,*:5204->5204/tcp,*:5230->5230/tcp,*:5235->5235/tcp,*:5105->5105/tcp,*:5134->5134/tcp,*:5275->5275/tcp,*:5294->5294/tcp,*:5121->5121/tcp,*:5142->5142/tcp,*:5175->5175/tcp,*:5184->5184/tcp,*:5252->5252/tcp,*:5276->5276/tcp,*:5282->5282/tcp,*:5115->5115/tcp,*:5120->5120/tcp,*:5147->5147/tcp,*:5154->5154/tcp,*:5168->5168/tcp,*:5239->5239/tcp,*:5241->5241/tcp,*:5253->5253/tcp,*:5117->5117/tcp,*:5128->5128/tcp,*:5249->5249/tcp,*:5259->5259/tcp,*:5279->5279/tcp,*:5109->5109/tcp,*:5112->5112/tcp,*:5217->5217/tcp,*:5242->5242/tcp,*:5100->5100/tcp,*:5190->5190/tcp,*:5223->5223/tcp,*:5246->5246/tcp,*:5278->5278/tcp,*:5116->5116/tcp,*:5161->5161/tcp,*:5150->5150/tcp,*:5187->5187/tcp,*:5263->5263/tcp,*:5268->5268/tcp,*:5110->5110/tcp,*:5113->5113/tcp,*:5170->5170/tcp,*:5136->5136/tcp,*:5144->5144/tcp,*:5180->5180/tcp,*:5212->5212/tcp,*:5232->5232/tcp,*:5101->5101/tcp,*:5119->5119/tcp,*:5178->5178/tcp,*:5200->5200/tcp,*:5222->5222/tcp,*:5226->5226/tcp,*:5227->5227/tcp,*:5299->5299/tcp,*:5104->5104/tcp,*:5135->5135/tcp,*:5254->5254/tcp,*:5298->5298/tcp,*:5151->5151/tcp,*:5194->5194/tcp
x9zio659iilb        neon-cluster-manager   replicated          1/1                 neoncluster/neon-cluster-manager:latest

Output of docker version:

Client:
 Version:      17.05.0-ce
 API version:  1.29
 Go version:   go1.7.5
 Git commit:   89658be
 Built:        Fri May  5 15:36:11 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.05.0-ce
 API version:  1.29 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   89658be
 Built:        Fri May  5 15:36:11 2017
 OS/Arch:      linux/amd64
 Experimental: true

[RFC] Add a command to backup/restore Swarm state

In reaction to moby/moby#33462, I was discussing with @diogomonica that if we consider the secrets store in swarm to be the "source of truth" (i.e., not just a "copy" of secrets stored/managed elsewhere), that there should be an easy way to backup and restore the swarm state, without users having to dive into /var/lib/docker/swarm to create an archive.

SwarmKit does have a tool to dump this data (https://github.com/docker/swarmkit/blob/master/cmd/swarm-rafttool/dump.go), but AFAIK this is not exposed through the docker CLI.

Use-cases:

  • backing up swarm state
  • support (sending swarm state for investigation)
  • migrating a cluster to new hardware / disaster recovery

Given that that directory contains both the data and the keys to decrypt it, perhaps keys and data should be exported separately, so that data can be backed up and transported safely.

/cc @aaronlehmann

docker rmi -f should always exit 0

I used docker rmi -f in a script and expected it to behave like rm's -f: if the image (or file) does not exist, still exit cleanly. For another example, mkdir -p is handy in shell scripts to idempotently make a directory tree whether or not it already exists, avoiding additional code to handle such cases.

For example:

$ docker rmi -f non-existant-image
Error response from daemon: No such image: non-existant-image:latest
$ echo $?
1
$ rm -f non-existant-file
$ echo $?
0

I'm not sure about the scope of this request (perhaps it also applies to docker rm), or its validity, but I did hear at DockerCon that you were interested in improving the developer experience no matter how small, and this would improve mine a little bit. Otherwise, I need to run "docker rmi -f imagename || true", or wrap it in a second docker call to make it conditional.

CLI should honor API version header when _ping returns unsuccessful status code

When _ping returns a non-200 HTTP status code, the CLI defaults to the latest API version. This can result in errors like:

your client is too new (API version 1.27). The newest supported API version is 1.26.0

...when the CLI moves on to performing the actual operation.

The CLI should still honor the version header, so _ping returning a bad status code doesn't prevent the CLI from working at all. Right now the client's Ping method just returns an error, but it should still return the version information in this case. Maybe the HTTP status code should be moved to a field on types.Ping instead of triggering an error.

I ran into this issue running the CLI against Docker Datacenter. Its _ping endpoint was returning a 500 status code because the cluster was considered unhealthy. But this caused the CLI to use a newer API version than was supported, so CLI commands didn't work at all while the cluster was in this state.

Vendor Usage

Hello,
I originally had an application that used github.com/docker/docker/cli/command to create a DockerCli, and then handled login to a private Docker registry by creating a types.AuthConfig struct and passing it to dockerCli.CredentialsStore(authConfig.ServerAddress).Store(authConfig). Now that types.AuthConfig has moved into the vendored packages, I am no longer able to use dockerCli.CredentialsStore. Is there something I'm doing wrong?

Unused packages in vendor directory

The vendor directory has github.com/docker/docker-credential-helpers/secretservice and github.com/docker/docker-credential-helpers/osxkeychain, but these don't appear to be imported anywhere. We should investigate if this is a bug in vndr.

cc @tiborvass

Add support to forward TLS passphrase to go-connections

The go-connections library recently included an option to set a TLS passphrase when decrypting private keys (docker/go-connections#35). It would be nice if this option could be added to the CLI as well, so users don't have to type their passphrase each time they want to execute a command with an encrypted key.

I can send the PR if you think this should go in. I propose the flag to be --tlspassphrase

@vdemeester WDYT?

Error message (caused by uppercase letter in name argument to "docker tag") does not mention casing

When I tried to use docker tag to specify my registry and image name, I got the following error:

docker tag 08245 registry.example.org/foo/screenApp
Error parsing reference: "registry.example.org/foo/screenApp" is not a valid repository/tag

I figured out what the problem was (uppercase letter in name component) and saw that others had the same problem.

Expected result for the error situation is that the error message mentions the wrongful presence of uppercase letters.
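A sketch of the kind of check that could produce the expected message. The helper is hypothetical, not the actual reference parser:

```go
package main

import (
	"fmt"
	"strings"
)

// casingHint returns an extra hint when an invalid reference differs from
// its lowercased form, so the parse error can call out uppercase letters.
func casingHint(ref string) string {
	if lower := strings.ToLower(ref); lower != ref {
		return fmt.Sprintf("repository names must be lowercase (did you mean %q?)", lower)
	}
	return ""
}

func main() {
	fmt.Println(casingHint("registry.example.org/foo/screenApp"))
}
```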

docker service ls/ps is not truncating digests anymore

Here's an example from (more or less) master:

$ docker service create --name atest alpine sleep 1000
dhbc2sl1p79s95eqj9ps8chrk
Since --detach=false was not specified, tasks will be created in the background.
In a future release, --detach=false will become the default.

$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                                                                                              PORTS
dhbc2sl1p79s        atest               replicated          1/1                 docker.io/library/alpine@sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96

$ docker service ps atest
ID                  NAME                IMAGE                                                                                              NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
oibpq5yjwda8        atest.1             docker.io/library/alpine@sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96   7766acd77ab4        Running             Running 9 seconds ago
root@7766acd77ab4:/go/src/github.com/docker/docker#

This was implemented in moby/moby#28539. I think this is a regression, possibly related to distribution/distribution#2142.
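For reference, the truncation being asked for amounts to something like this (a hypothetical helper, not the formatter code that regressed):

```go
package main

import (
	"fmt"
	"strings"
)

// truncateDigestRef shortens an image reference carrying a sha256 digest,
// keeping the repository name and the first 12 hex characters.
func truncateDigestRef(ref string) string {
	name, digest, found := strings.Cut(ref, "@sha256:")
	if !found {
		return ref // no digest, nothing to truncate
	}
	if len(digest) > 12 {
		digest = digest[:12]
	}
	return name + "@sha256:" + digest
}

func main() {
	fmt.Println(truncateDigestRef("docker.io/library/alpine@sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96"))
	// docker.io/library/alpine@sha256:c0537ff6a521
}
```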

cc @dmcgowan

Docker CLI is trying to use Docker Registry API V1 instead of V2

Hi,

I'm aware that my issue is probably not a big deal, but still, I can't find any documentation or people who seem to have encountered this problem. So I'm posting here, after a few hours of testing and research.

Note: At the moment I'm trying this without TLS at all, to simplify the process.
Note 2: My private registry is added to the insecure registries, and I disabled TLS verification on my client-side Docker instance.

Description of my issue:
I have a private Docker Registry v2 on my local network, and I want to push an image from my local computer to this registry.

I do:
> docker tag my-image 172.19.x.x:5000/myimage-repo
And then:
> docker push 172.19.x.x:5000/myimage-repo

I got this output:
Put http://172.19.x.x:5000/v1/repositories/myimage-repo/: dial tcp 172.19.x.x:5000: getsockopt: no route to host

I tried to play a bit with the URL, but my client-side Docker keeps trying to reach something behind /v1/repositories.

Versions & Specs:
I'm on Windows 10 and using Docker v17.03. The registry (v2.6.0) is a Docker container running on a VM.

How can I tell Docker to use the v2 registry? Thanks a lot.

[RFC] Add "docker service rollback" subcommand

docker 1.13 added a --rollback flag to docker service update (moby/moby#26421) to allow manually rolling back a service to a previous version.

This flag cannot be combined with other flags on the service update subcommand, which is confusing (moby/moby#33444)

Looking at this functionality, I think this flag is essentially a subcommand "disguised" as a flag, and I suggest adding a rollback subcommand:

docker service rollback <service-name|service-id>

which would be the equivalent of

docker service update --rollback <service-name|service-id>

Given that this functionality is already present in the API, this can be implemented as a cli-only change.

ping @aaronlehmann @dnephin @tiborvass

Feedback for https://github.com/docker/cli/blob/master/docs/reference/commandline/commit.md

Problem description

Syntax for commit --change should be further elaborated with regard to setting the Env value. The example given works for setting one Env environment variable, though it seems to be additive only. Can one set multiple Env variables? Can one delete an Env variable? Can one completely overwrite all of Env? In addition, another syntax that includes square brackets also seems to work; see http://stackoverflow.com/questions/29015023/docker-commit-created-images-and-entrypoint and the answer by user sxc731. That syntax could be elaborated upon, unless using it is not best practice.

Problem location

https://docs.docker.com/engine/reference/commandline/commit/
Apparently this references:
https://github.com/docker/cli/blob/master/docs/reference/commandline/commit.md

Project version(s) affected

latest published version

Suggestions for a fix

See above

Would be nice to have a `docker stack deploy --dry-run` option

Because it's not always obvious what state an existing stack is in when a docker stack deploy command is issued, it would be useful to be able to see a plan of action before it's executed to prevent surprises.

I'm proposing --dry-run as the option name.

"docker stack rm" results in "page not found"

I'm guessing this is related to #162

I'm using a master build which I think has the fix for #162

C:\Users\friism>docker version
Client:
 Version:      17.06.0-dev
 API version:  1.29 (downgraded from 1.30)
 Go version:   go1.8.3
 Git commit:   f82f61e
 Built:        Sun Jun 11 23:59:54 2017
 OS/Arch:      windows/amd64

Server:
 Version:      17.05.0-ce
 API version:  1.29 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   89658be
 Built:        Thu May  4 22:10:54 2017
 OS/Arch:      linux/amd64
 Experimental: false
C:\code\atsea-sample-shop-app>docker service ls
ID                  NAME                       MODE                REPLICAS            IMAGE                             PORTS
9sr4ophfuwmm        atsea_reverse_proxy        replicated          0/1                 friism/reverse_proxy:latest
hwcgsqn3kio3        atsea_database             replicated          1/1                 friism/atsea_db:latest            *:5432->5432/tcp
ov06vkl99x6c        dockercloud-server-proxy   global              1/1                 dockercloud/server-proxy:latest   *:2376->2376/tcp
s45iwlnfbskp        atsea_payment_gateway      replicated          1/1                 friism/payment_gateway:latest
w8v9yf4aunyv        atsea_appserver            replicated          1/1                 friism/atsea_app:latest           *:5005->5005/tcp,*:8080->8080/tcp

C:\code\atsea-sample-shop-app>docker stack ls
NAME                SERVICES
atsea               4

C:\code\atsea-sample-shop-app>docker stack rm atsea
Error response from daemon: page not found

version: "3.1"

services:
  reverse_proxy:
    build: ./reverse_proxy
    image: friism/reverse_proxy

  database:
    build:
       context: ./database
    image: friism/atsea_db
    environment:
      POSTGRES_USER: gordonuser
      POSTGRES_DB_PASSWORD_FILE: /run/secrets/postgres_password
      POSTGRES_DB: atsea
    ports:
      - "5432:5432"
    networks:
      - back-tier
    secrets:
      - postgres_password

  appserver:
    build:
       context: app
       dockerfile: Dockerfile
    image: friism/atsea_app
    ports:
      - "8080:8080"
      - "5005:5005"
    networks:
      - front-tier
      - back-tier
    secrets:
      - postgres_password

  payment_gateway:
    build:
      context: payment_gateway
    image: friism/payment_gateway
    networks:
      - payment
    secrets:
      - payment_token

networks:
  front-tier:
  back-tier:
  payment:
    driver: overlay

secrets:
  postgres_password:
    file: ./devsecrets/postgres_password
  payment_token:
    file: ./devsecrets/payment_token

C:\code\atsea-sample-shop-app>

Import moby/moby/integration-cli/docker_cli_help_test.go

This file only tests the CLI, so we should be able to import into this repo.

The test is a bit verbose; we should either change it to use golden files, or split it into separate test cases with test tables so it's easier to debug when tests fail.

Feature: TLS Expiry Notice

Description

We use TLS to communicate between our docker clients and servers. After a while we begin to take this for granted and forget about it. Sometimes a certificate will expire and then it will take us some time debugging to figure out what's going on.

Steps to reproduce the issue:

  1. Have TLS communication between docker client and server
  2. Have TLS certificate expire

Describe the results you received:

I received a message from docker where the only technical portion of the error was bad cert.

Describe the results you expected:

I expect that when a certificate is close to expiring (perhaps 30 days out), the client (or server) would warn me that there are only N days left before the certificate expires.

Improve makefile situation

A couple of issues with Makefiles in this repo:

The first issue is that they are full of @ output suppression. A lot of the targets execute for a long time and I can't see what is happening or if anything is stuck. Also docker build uses -q for some reason.

The other problem is the duplication of content across multiple Makefiles and the aggressive use of mounts. Bind mounts break all cases when building against a remote API, and they are almost never needed. By not using mounts you get similar performance and better caching. Similarly, I see no reason to define these local duplicate targets, which can easily go out of sync and have no guarantee of correctness, as they rely on the state of the host system. Looking at the issues in this repo, there are already multiple reports of these problems showing up.

My proposal is:

  • Replace Makefile with docker.Makefile
  • Add a build target that copies artifacts back to the host to keep the current behavior. GOOS can be set with a build arg.
  • Same for vendor and go generate target
  • Build step should run as part of a Dockerfile not in a separate container. Target stages can be used to build without tests or get devenv.
  • Dev-env target can keep using the mounts(for now)
  • Separate Dockerfile for CI should not be needed. At least after ci supports multi-stage builds if we want to use some better syntax for the local version.

@dnephin

Error running make build

Upon a fresh clone, running make build gives:

./scripts/build/binary: 3: ./scripts/build/binary: source: not found

Perhaps we should default to building in the container?

OS: Debian Sid
Kernel: 4.10.1
Shell: bash

[RFC] Strip "Error response from daemon:" prefix

Description

Whenever an error is coming from the "daemon", we prefix error messages with "Error response from daemon".

I realise this could help "us" quickly determine where the error is coming from (although in most cases, we probably know), but the prefix doesn't add much for the user, and in fact may make the information that is important to them less visible.

We're also not consistent; some messages have the prefix, but others don't

$ docker inspect nosuchthing
[]
Error: No such object: nosuchthing


$ docker network create -d blablabla foo
Error response from daemon: legacy plugin: plugin not found

Describe the results you expected:

I suggest to remove/strip the Error response from daemon: and just print the error message that was returned (or a friendlier message generated in the CLI)

unit tests on commands

Moving initial docker/docker issue moby/moby#31217 here

This issue is to keep track of unit tests for the CLI (i.e., the cli/command package and sub-packages).

The swarm, node and volume packages are examples of how to write those tests (but they can probably be enhanced, and enhancements are always welcome).

  • In order to write tests for these, you'll have to create more builders (cli/internal/test/builders) for objects that are not yet there.
  • You can also use golden files (see examples)
  • For each of these packages, there are some integration tests (in integration-cli) that could be removed. I tried to mark them when I saw them, but it's not complete yet. I'll try to update this issue with a list of integration tests that could be removed for each package.

Service creation with duplicate network name on different scopes (one in local and one in swarm)

Description

In command line docker service create, the network name is now converted to network ID then passed to daemon (vs. done in the daemon previously):

func convertNetworks(ctx context.Context, apiClient client.NetworkAPIClient, networks opts.NetworkOpt) ([]swarm.NetworkAttachmentConfig, error) {
	var netAttach []swarm.NetworkAttachmentConfig
	for _, net := range networks.Value() {
		networkIDOrName := net.Target
		_, err := apiClient.NetworkInspect(ctx, networkIDOrName, false)
		if err != nil {
			return nil, err
		}
		netAttach = append(netAttach, swarm.NetworkAttachmentConfig{Target: net.Target, Aliases: net.Aliases, DriverOpts: net.DriverOpts})
	}
	sort.Sort(byNetworkTarget(netAttach))
	return netAttach, nil
}

However, as convertNetworks uses inspect to do the network name conversion, a duplication error would be returned in case there are multiple network IDs available for the same network name.

In certain situations, it might be possible that there are multiple networks with the same name but with different scope, e.g, one foo in local and one foo in swarm.

For that reason, I think it makes sense to change the way convertNetworks works so that only networks in the swarm scope are searched.

Additional information you deem important (e.g. issue happens only occasionally):

This issue may need to be resolved in order to eventually fix network duplication issue in moby/moby/pull/30897 , moby/moby/issues/33561, moby/moby/issues/30242

`label` filter for `docker system prune` ignored by non-recent Docker hosts

label filter for docker system prune (and others like docker container prune) has been added in that commit: moby/moby@7025247

But using it against non-recent Docker hosts may be dangerous, as this filter is not even considered.

It leads to situations like this one:

$ docker create --label foo=bar redis
5fb69165f8069420f34117c7ec1bc62b55c56cf1f9d9c148188bab1814b3d859
$ docker container prune --filter label=foo=nope
WARNING! This will remove all stopped containers.
Are you sure you want to continue? [y/N] y
Deleted Containers:
5fb69165f8069420f34117c7ec1bc62b55c56cf1f9d9c148188bab1814b3d859

The CLI should return an error when using that filter if the remote API version is lower than 1.29.
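The gate needs a numeric major.minor comparison, since a plain string compare would get "1.9" vs "1.29" wrong. A sketch with hypothetical helper names:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// apiVersion parses a "major.minor" API version string.
func apiVersion(v string) (major, minor int) {
	parts := strings.SplitN(v, ".", 2)
	major, _ = strconv.Atoi(parts[0])
	if len(parts) == 2 {
		minor, _ = strconv.Atoi(parts[1])
	}
	return major, minor
}

// versionLessThan compares versions numerically.
func versionLessThan(a, b string) bool {
	amaj, amin := apiVersion(a)
	bmaj, bmin := apiVersion(b)
	if amaj != bmaj {
		return amaj < bmaj
	}
	return amin < bmin
}

func main() {
	serverVersion := "1.28" // e.g. as reported by the daemon
	if versionLessThan(serverVersion, "1.29") {
		fmt.Println(`Error: the "label" filter requires API version 1.29 or higher`)
	}
}
```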

`docker run` can't rely on the Events API.

Moved from moby/moby#32242

Description

Since Docker 1.13, the implementation of docker run ... changed to use the Events API over ContainerWait to wait for the container to exit. This was done to facilitate the move of handling autoRemove on the daemon rather than the CLI - the problem with the Wait API being that it only waited for the container to exit, and not for the container to be removed which would cause race conditions if, for example, a successive CLI command tried to create a container with the same name and it had not yet been cleaned up by the daemon.

While there is no problem with this Events API approach when connecting directly to the Docker daemon which is running the container, I have seen several issues with it when the "daemon" is actually a Docker Swarm cluster (classic swarm, not swarmkit). One major problem is that swarm does not support event filtering at all, so it ignores the container filter and ends up streaming back all cluster events. This means that your CLI may receive a "die" event for another container, causing your CLI to exit prematurely. Even if filtering were added to the swarm manager, there is yet another issue: its aggregated event stream is not reliable. It may temporarily lose the event stream from the engine which is running the target container but will not disconnect the client, because the aggregate stream as a whole is still working. If the container exits during this short disconnection window, the events will never be sent to the CLI and it will hang forever. There is no way to guarantee that the event will be sent back to the client.

Make fails with `make: *** [binary] Error 1`

Make fails with the helpful error message of make: *** [binary] Error 1

A little debugging shows that https://github.com/docker/cli/blob/master/scripts/build/.variables#L7 is the issue.

On OSX:

$ date --utc --rfc-3339 ns
date: illegal option -- -
usage: date [-jnRu] [-d dst] [-r seconds] [-t west] [-v[+|-]val[ymwdHMS]] ...
            [-f fmt date | [[[mm]dd]HH]MM[[cc]yy][.ss]] [+format]

I know that the README now shows the Dockerized make targets, so this will only be an issue for people like me on OSX who didn't RTFM 😁

It would be nice for this to be fixed, or for the Dockerized Makefile to become the default.
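For reference, a portable variant of the timestamp command might look like the following; it works with both GNU (Linux) and BSD (macOS) date, at the cost of dropping the nanosecond precision the original flag asked for. The variable name is illustrative.

```shell
# Works on both GNU and BSD date: -u and the strftime-style format are
# portable, unlike `date --utc --rfc-3339 ns`.
BUILDTIME=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
echo "$BUILDTIME"
```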

Enable trust pinning with docker content trust

Description
The current version of notary supports two types of trust pinning: 1) certificate pinning, which pins to a specific certificate, and 2) CA pinning, which uses a provided certificate of a trusted CA to validate the leaf certificate in the metadata.

Having the ability to pin to a trusted certificate/CA is extremely important, as the current TOFU (trust on first use) model does not prevent MITM attacks against those who are pulling from the repository for the very first time.

At the moment, this feature seems to be disabled as an empty trust pin config is being passed into client.NewNotaryRepository().
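For context, a non-empty pin configuration would express something along the following lines, shown here in the style of notary's client configuration; the GUNs, certificate ID, and CA path are placeholders:

```json
{
  "trust_pinning": {
    "certs": {
      "docker.io/library/alpine": ["<leaf-certificate-id>"]
    },
    "ca": {
      "docker.io/myorg/": "/path/to/trusted-ca.pem"
    }
  }
}
```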

`--help` doesn't work if daemon is not available

C:\Users\friism                                                                          
λ  docker deploy --help                                                                  
docker deploy is only supported on a Docker daemon with experimental features enabled    
C:\Users\friism                                                                          
λ  docker version                                                                        
Client:                                                                                  
 Version:      17.05.0-ce                                                                
 API version:  1.29                                                                      
 Go version:   go1.7.5                                                                   
 Git commit:   89658be                                                                   
 Built:        Thu May  4 21:43:09 2017                                                  
 OS/Arch:      windows/amd64                                                             
Error response from daemon: i/o timeout 

Speaking with @dnephin, he thinks it's related to --experimental

Ideally I'd be able to get help from the CLI without having access to the daemon.

Memo per image

I wasn't sure where to open this feature request, it might be more appropriately placed in moby/*

I'd like the ability to add a comment/description to new image layers on every commit. Something like a commit message in a versioning system.

So I should be able to do something like

docker commit <running_container> <image_to_be_saved> -m "This new layer adds <thing1> and <thing2>"

and when I do something like

docker image <name> log

I would be able to see my commit messages.

Add CheckRedirect for Go 1.8

We need to make sure that we get the fix from moby/moby#32127 if we want to build the client with Go 1.8.

The changes in that PR will not take effect in the CLI automatically, because the CLI creates its own client in cli/command/cli.go, so we need to use the CheckRedirect function from that PR where we create the client.

Support for docker stack run is missing

Just like docker-compose run <service> rake db:migrate, with the whole environment, dependencies, and services started (postgres, redis, etc...), it would be very useful to run a container within a deployed stack: docker stack run <stack_name> <service> rake db:migrate

The workaround is to SSH into the correct node and run a docker exec on the right container. This is not friendly and can be a source of errors.

--health-start-period value isn't shown when pretty inspecting service

Description

When I set --health-start-period on a service, I cannot see the set value using docker service inspect --pretty.

Steps to reproduce the issue:

  1. docker service update foo --health-start-period 2m
  2. docker service inspect --pretty foo | grep -i health

Describe the results you received:

No results when looking for a "Health Start Period" value.

Describe the results you expected:

I think this is an important value that should appear in the prettified output by default, too. I'm aware that the full JSON inspect output does carry this info, under

                "ContainerSpec": {
                    "Healthcheck": {
                        "StartPeriod": 120000000000
                    },
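Durations in the JSON API are expressed in nanoseconds; a quick sanity check that the 120000000000 above is indeed the 2m set in step 1:

```shell
# Convert the nanosecond StartPeriod from the inspect JSON to minutes.
ns=120000000000
echo "$(( ns / 1000000000 / 60 ))m"   # prints 2m
```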

Output of docker version:

Client:
 Version:      17.05.0-ce
 API version:  1.29
 Go version:   go1.7.5
 Git commit:   89658be
 Built:        Thu May  4 22:06:25 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.05.0-ce
 API version:  1.29 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   89658be
 Built:        Thu May  4 22:06:25 2017
 OS/Arch:      linux/amd64
 Experimental: false

[17.06] Support --network=host and --network=bridge in stack deploy

With 17.06, moby supports --network=host and --network=bridge in docker service create.
But when used in a stack file, it fails due to a strict check in the client.

$ cat host.yml
networks:
  host:
    external: true
services:
  test:
    image: mrjana/simpleweb:latest
    networks:
      host : null
    command: simpleweb
version: '3.0'
volumes: {}


$ docker stack deploy --compose-file=host.yml testh
network "host" is declared as external, but it is not in the right scope: "local" instead of "swarm"

I think we should relax this check and let the engine fail for any such invalid external network configuration.

cc @dnephin wdyt ?

Add --pretty option to "secret inspect" and "config inspect"

Other "inspect" subcommands have a --pretty option. secret inspect and config inspect appear to be missing it. This is important because otherwise there's no human-readable way to get information about a config or secret object. It's particularly necessary for config inspect so the config payload can be exposed in cleartext, instead of the base64-encoded form that gets spit out over the JSON API.
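Until a --pretty flag exists, one workaround is decoding the payload by hand. The pipeline below simulates it with a literal value, since the Data field in the JSON API output is base64-encoded; in real use, the string would come out of `docker config inspect` (e.g. via jq's `.[0].Spec.Data`):

```shell
# Simulated .Spec.Data value; the JSON API base64-encodes the payload.
spec_data='c2VjcmV0IHBheWxvYWQK'
printf '%s' "$spec_data" | base64 -d
```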

cc @dhiltgen

build cli dynamically linked

We need the ability to compile the CLI as a dynamically linked binary so it can be packaged as an RPM or deb with dependencies on other RPMs and debs.

containerd problems after OOM

Description
When the OOM killer terminates the process inside a container, the container stays in the docker ps output, but it can't be accessed with docker exec, and docker stop doesn't work either.

Steps to reproduce the issue:

  1. Run a container
  2. Allocate enough memory to trigger the OOM killer
  3. Wait for OOM message in log
  4. Try to interact with container

Describe the results you received:

# docker ps
CONTAINER ID   IMAGE  COMMAND   CREATED   STATUS    PORTS     NAMES
cddb83de5cf9       cu:1.1.1   "convert.sh"    4 hours ago    Up 4 hours CU

# docker exec -it cddb83de5cf9 whoami
rpc error: code = 2 desc = containerd: container not found

Describe the results you expected:
I expect the container to disappear from the docker ps output.

Additional information you deem important:
issue happens only occasionally

Output of docker version:

Client:
 Version:      17.03.1-ce
 API version:  1.27
 Go version:   go1.7.5
 Git commit:   c6d412e
 Built:        Mon Mar 27 17:05:44 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.03.1-ce
 API version:  1.27 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   c6d412e
 Built:        Mon Mar 27 17:05:44 2017
 OS/Arch:      linux/amd64
 Experimental: false

Output of docker info:

# docker info
Containers: 43
 Running: 43
 Paused: 0
 Stopped: 0
Images: 20
Server Version: 17.03.1-ce
Storage Driver: overlay
 Backing Filesystem: xfs
 Supports d_type: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 4ab9917febca54791c5f071a9d1f404867857fcc
runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-514.10.2.el7.x86_64
Operating System: Scientific Linux 7.3 (Nitrogen)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.64 GiB
Name: webbox6
ID: HYHP:WBBD:LBM4:4J2X:NDE5:ALGF:4XDG:H3JQ:RPMY:JDDI:Z57T:RBYB
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

Similar issue moby/moby#33192

Fetch the Image ID of docker plugin

I am not sure how I can fetch the image ID of a docker plugin, which should match the config.digest field in the manifest stored in the registry.

For docker images, once I build an image and push it to the registry, the image ID shown by "docker images $Image_Name" exactly matches the config.digest field in the manifest for that image in the registry, since image IDs are now content-addressable SHAs.

I want to do the same thing for docker plugins. I create a plugin with the "docker plugin create" command and push the plugin to the registry. The docker plugin create/install commands create a container and run the plugin in it, and there is no way to get the image ID of the plugin to compare with the config.digest of the manifest in the registry.

Is there a way to do this, or am I missing something? I am using docker client 1.13 and the latest registry using schema version 2. This is helpful for our signing service, where we allow docker pull only for signed/valid images or plugins.
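For background, the reason image IDs line up with config.digest is that the ID is simply the SHA-256 of the image's config JSON, so anyone holding the config blob can recompute it. The snippet below illustrates the computation with a toy blob, not a real image config:

```shell
# Toy stand-in for an image/plugin config blob.
config='{"architecture":"amd64","os":"linux"}'
digest="sha256:$(printf '%s' "$config" | sha256sum | cut -d' ' -f1)"
echo "$digest"
```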

Thanks

No response when running docker exec on Windows Server 2008

Description
On Windows Server 2008 I am trying to run a standard Ubuntu container and then login from the CLI via the docker exec command. This command fails/takes forever.

Steps to reproduce the issue:

  1. Start Windows Server 2008
  2. Install current Docker Toolbox
  3. docker run -it ubuntu
  4. In different shell: docker exec <container_name> bash

Describe the results you received:
The docker exec command hangs forever and does not show a shell.

Describe the results you expected:
Seeing a shell.

Additional information you deem important (e.g. issue happens only occasionally):
Output of docker-machine ls:

NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER        ERRORS
default   *        virtualbox   Running   tcp://192.168.99.100:2376           v17.05.0-ce

docker-machine ssh works. Performing the command inside the machine gives me the same behavior.

Output of docker version:

Client:
 Version:      17.05.0-ce
 API version:  1.29
 Go version:   go1.7.5
 Git commit:   89658be
 Built:        Fri May  5 15:36:11 2017
 OS/Arch:      windows/amd64

Server:
 Version:      17.05.0-ce
 API version:  1.29 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   89658be
 Built:        Thu May  4 21:43:09 2017
 OS/Arch:      linux/amd64
 Experimental: false

Output of docker info:

Containers: 9
 Running: 1
 Paused: 0
 Stopped: 8
Images: 26
Server Version: 17.05.0-ce
Storage Driver: aufs
 Root Dir: /mnt/sda1/var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 82
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9048e5e50717ea4497b757314bad98ea3763c145
runc version: 9c2d8d184e5da67c95d601382adf14862e4f2228
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.4.66-boot2docker
Operating System: Boot2Docker 17.05.0-ce (TCL 7.2); HEAD : 5ed2840 - Fri May  5 21:04:09 UTC 2017
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.955GiB
Name: default
ID: ZGZY:CDLZ:M2GR:4M33:SA7Z:RF2Z:OZCA:7Z6X:Y6Y7:FF6Z:VZBP:T4JX
Docker Root Dir: /mnt/sda1/var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 42
 Goroutines: 60
 System Time: 2017-06-02T00:07:06.35199767Z
 EventsListeners: 2
Registry: https://index.docker.io/v1/
Labels:
 provider=virtualbox
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

Additional environment details (AWS, VirtualBox, physical, etc.):
Windows Server 2008 using Virtualbox

Add support for --health-start-period in Compose/Stack file

Description

With Docker CE 17.05 there is the brand-new --health-start-period option for a service, implemented with moby/moby#28938. But there is no such option available in a Compose/Stack file yet, so we cannot properly manage it with a YAML file.

I don't think I'm qualified to propose the key name that should eventually be implemented in the Compose file, but I'd say that start_period, under healthcheck, could be a good name.
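As a sketch of the proposal, the key could sit in a service's healthcheck block like this (image and values are illustrative; a later Compose file format, 3.4, did adopt this exact key):

```yaml
version: "3.4"
services:
  web:
    image: nginx:alpine
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/"]
      interval: 30s
      start_period: 2m
```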

build natively on the cross compile architectures

Description

Would like the ability to build natively on the cross-compilation target architectures, i.e. arm, amd64, s390x, ppc64le.

Steps to reproduce the issue:

On an arm machine:

$ make -f docker.Makefile binary

Describe the results you received:

The Dockerfile used to build the binary is x86_64-only:

panic: standard_init_linux.go:175: exec user process caused "exec format error" [recovered]
	panic: standard_init_linux.go:175: exec user process caused "exec format error"

Describe the results you expected:

A valid native binary in the build dir.

Additional information you deem important (e.g. issue happens only occasionally):

Workaround for now is to cross-compile the other architectures on an x86_64 machine:

$ make -f docker.Makefile cross

Azure Container Registry login scenario discussion

Hi Docker team:

I am opening this issue as a channel to discuss with the Docker team the best way forward with the Azure scenario. This issue is an extension of a closed PR, #105.

To reiterate, we want to enable the user to take advantage of our AAD device login feature for docker. We are also hoping that the user would not need to install the Azure CLI for this scenario. Namely, we want the user to be able to call "docker login" to log in to an Azure Container Registry, but the login process would be Azure-specific.

Following suggestions from @friism, I created an azure credential helper prototype which can be wrapped around any credential helper. https://github.com/shhsu/docker-acr-cred-helper/blob/master/program.go

In order for this component to work properly though, there are 3 changes that need to be made on the docker cli side:

  1. Because the Azure credential helper interacts with the user and prompts them to perform device login, we need to pipe its stderr through the docker CLI (docker/docker-credential-helpers#64)
  2. The Azure credential helper should freely support any credential helper as its backend for storing the credentials. Since information can be written to the active docker config file, we need a way for the credential helper to know which config file is currently active. We could do this by setting the current CLI config location in a DOCKER_CONFIG variable in the shell session when we invoke the credential helper
  3. This is the most intrusive change: https://github.com/shhsu/cli-1/blob/v2_credhelper/cli/command/registry.go#L86-L103. When given no -p and -u during login, the ConfigureAuth method currently always prompts the user for a username and password. The username retrieved from the cred store is only used as the default value during the prompt, and the password is not used at all. This will not work for us, because our credential helper has already produced a username/password pair, and prompting again breaks our scenario.
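Point 2 above can be sketched as follows; the helper function is a hypothetical stand-in for a real credential helper binary, but DOCKER_CONFIG is the variable the docker CLI itself already honors for its config location:

```shell
# Hypothetical flow: the CLI exports the active config directory, and
# the credential helper (stand-in function below) reads it back to
# know which config file is currently active.
export DOCKER_CONFIG="$HOME/.docker"
credential_helper() {
  echo "active config: $DOCKER_CONFIG/config.json"
}
credential_helper
```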

We'd like to open a conversation with the Docker team and see if you are open to allowing the Azure Container team to make these 3 changes.

Alternatively, we are also thinking of introducing a new component of docker called docker-login-manager. Similar to docker-credential-helper, this login manager would be a plugin component. However, the only action this login manager would perform is to get the AuthConfig object. The ConfigureAuth method would first go to the user-configured login manager to log in. If for any reason the login manager fails to retrieve the username and password, the CLI would then step in and prompt for them.

Please let us know the most suitable path forward.

Thanks
Peter

TTY emulator broken for 17.06-rc2 on Windows

Description

I just switched from the stable 17.03 community release to edge 17.06-rc2 on Windows. The TTY emulator doesn't appear to be converting LFs from the container into CR-LFs on the Windows side.

This seems like a critical problem that will break a lot of things.

Steps to reproduce the issue:

  1. Install 17.06-rc2 on Windows
  2. Start a container: docker run -it --rm alpine sh
  3. Press the ENTER key a few times.

Describe the results you received:

Note how the Linux prompt is indented further after each keypress.

C:\src\NeonForge\Images\neon-cli>docker run -it --rm alpine sh
/ #
    / #
        / #
            / #
                / #
                    / #
                        / #
                            / #

Describe the results you expected:

Expected to see CRLFs so that each new line starts at column 0 in my window.
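The missing conversion can be sketched in shell; this is a toy filter, not the actual console code, showing the CR that (presumably) should be emitted before each bare LF coming from the Linux pty:

```shell
# Emit CR before each LF so a Windows console returns to column 0.
lf_to_crlf() { awk '{ printf "%s\r\n", $0 }'; }
printf '/ #\n/ #\n' | lf_to_crlf | od -c
```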

Output of docker version:

Client:
 Version:      17.06.0-ce-rc2
 API version:  1.30
 Go version:   go1.8.1
 Git commit:   402dd4a
 Built:        Wed Jun  7 10:01:32 2017
 OS/Arch:      windows/amd64

Server:
 Version:      17.06.0-ce-rc2
 API version:  1.30 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   402dd4a
 Built:        Wed Jun  7 10:02:04 2017
 OS/Arch:      linux/amd64
 Experimental: true

Output of docker info:

Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 153
Server Version: 17.06.0-ce-rc2
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 167
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host ipvlan macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 3addd840653146c90a254301d6c3a663c7fd6429
runc version: 992a5be178a62e026f4069f443c6164912adbf09
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.9.30-moby
Operating System: Alpine Linux v3.5
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.934GiB
Name: moby
ID: UQXS:6ZYI:NP6F:3IN5:AKMV:KUAF:FJDX:B3BS:P2PQ:PFWT:OPDQ:6OFQ
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

Stack deploy fails with 17.06.0-rc1 client against 17.05.0 swarm

Description

I'm having some difficulty deploying a simple stack to a swarm running Docker 17.05.0 from a 17.06.0-rc1 client. I get Error response from daemon: page not found.

Steps to reproduce the issue:

Given the following Compose file:

version: '3.2'

services:
  mssql:
    image: microsoft/mssql-server-linux:ctp2-1
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=converis
    networks:
      - data

networks:
  data:

  1. Set DOCKER_HOST to point to a Docker 17.05.0 swarm controller.
  2. Run docker stack deploy -c stack.yml dm.

Describe the results you received:

The deploy fails with the following output.

(swarm) →  dm git:(review-stack) ✗ docker -D stack deploy -c stack.yml dm
DEBU[0000] Trusting 1 certs                             
DEBU[0000] Trusting 1 certs                             
Creating network dm_data
service mssql: Error response from daemon: page not found

Describe the results you expected:

The data network and mssql service should be created.

Additional information you deem important (e.g. issue happens only occasionally):

Using the same Docker 17.06.0-rc1 client against a Docker 17.06.0-rc1 swarm works fine. As does using a Docker 17.05.0 client against the Docker 17.05.0 swarm. I only get the error going from the 17.06.0-rc1 client to the 17.05.0 swarm.

Output of docker version:

(swarm) →  dm git:(review-stack) ✗ docker version
Client:
 Version:      17.06.0-ce-rc1
 API version:  1.29 (downgraded from 1.30)
 Go version:   go1.8.1
 Git commit:   7f8486a
 Built:        Wed May 31 02:56:01 2017
 OS/Arch:      darwin/amd64

Server:
 Version:      17.05.0-ce
 API version:  1.29 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   89658be
 Built:        Thu May  4 22:06:25 2017
 OS/Arch:      linux/amd64
 Experimental: true

Output of docker info:

(swarm) →  dm git:(review-stack) ✗ docker info
Containers: 23
 Running: 20
 Paused: 0
 Stopped: 3
Images: 50
Server Version: 17.05.0-ce
Storage Driver: devicemapper
 Pool Name: docker-thinpool
 Pool Blocksize: 524.3kB
 Base Device Size: 10.74GB
 Backing Filesystem: xfs
 Data file: 
 Metadata file: 
 Data Space Used: 55.15GB
 Data Space Total: 65.28GB
 Data Space Available: 10.12GB
 Metadata Space Used: 17.87MB
 Metadata Space Total: 683.7MB
 Metadata Space Available: 665.8MB
 Thin Pool Minimum Free Space: 6.527GB
 Udev Sync Supported: true
 Deferred Removal Enabled: true
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Library Version: 1.02.135-RHEL7 (2016-11-16)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins: 
 Volume: local
 Network: bridge host ipvlan macvlan null overlay
 Log: 
Swarm: active
 NodeID: svhio5na7zjds72nnw55gjrwh
 Is Manager: true
 ClusterID: 1dbs7dgdqmz5u8h2bncrzmh88
 Managers: 4
 Nodes: 4
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
  Force Rotate: 0
 Root Rotation In Progress: false
 Node Address: 10.43.64.10
 Manager Addresses:
  10.43.64.10:2377
  10.43.64.11:2377
  10.43.64.12:2377
  10.43.64.13:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9048e5e50717ea4497b757314bad98ea3763c145
runc version: 9c2d8d184e5da67c95d601382adf14862e4f2228
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-514.16.1.el7.x86_64
Operating System: Red Hat Enterprise Linux Server 7.3 (Maipo)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 15.51GiB
Name: itpocnode01.ucalgary.ca
ID: OM7K:NNN7:75VJ:I34C:W5ZS:TCYH:ZGPH:SRXP:HINS:7O2W:IZQ4:QIGD
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

Additional environment details (AWS, VirtualBox, physical, etc.):

Client is using Docker for Mac (edge). 17.05.0 swarm is 4 RHEL 7.3 VMs. 17.06.0-rc1 is Docker for Mac (edge) in swarm mode.
