
gradle-docker-compose-plugin's Introduction


Simplifies usage of Docker Compose for local development and integration testing in a Gradle environment.

composeUp task starts the application and waits until all containers become healthy and all exposed TCP ports are open (i.e. until the application is ready). It reads the assigned host and ports of the particular containers and stores them into the dockerCompose.servicesInfos property.

composeDown task stops the application and removes the containers, but only if stopContainers is set to true (the default).

composeDownForced task stops the application and removes the containers.

composePull task pulls and optionally builds the images required by the application. This is useful, for example, with a CI platform that caches docker images to decrease build times.

composeBuild task builds the services of the application.

composePush task pushes images for services to their respective registry/repository.

composeLogs task stores the logs from all containers to files in the containerLogToDir directory.
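For example, composeLogs can be wired to run after the tests so that container logs are always collected, even on failure. A minimal sketch (the containerLogToDir value is illustrative; it assumes the plugin has been applied):

```groovy
// build.gradle -- illustrative sketch
dockerCompose {
    containerLogToDir = project.file('build/containers-logs')
}

// collect container logs even when tests fail
test.finalizedBy 'composeLogs'
```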

Quick start

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath "com.avast.gradle:gradle-docker-compose-plugin:$versionHere"
    }
}

apply plugin: 'docker-compose'

// Or use the Gradle Plugin Portal (then you don't have to add the dependency as above):
// plugins {
//  id 'com.avast.gradle.docker-compose' version "$versionHere"
// }

dockerCompose.isRequiredBy(test)
  • docker-compose up is executed in the project directory, so it uses the docker-compose.yml file.
  • If the provided task (test in the example above) spawns a new process, then environment variables and Java system properties are provided to it.
    • The environment variables are named ${serviceName}_HOST and ${serviceName}_TCP_${exposedPort} (e.g. WEB_HOST and WEB_TCP_80).
    • The Java system properties are named ${serviceName}.host and ${serviceName}.tcp.${exposedPort} (e.g. web.host and web.tcp.80).
    • If the service is scaled, then serviceName gets a _1, _2... suffix (e.g. WEB_1_HOST and WEB_1_TCP_80, web_1.host and web_1.tcp.80).
      • Please note that in Docker Compose v2, the suffix contains - instead of _.
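The naming convention above means a test can look up its dependencies without hard-coding ports. A sketch of the consuming side (the service name 'web' and port 80 are illustrative):

```groovy
// Inside a test executed via the hooked task -- a sketch, not part of the plugin API.
// The plugin has injected 'web.host' / 'web.tcp.80' (and WEB_HOST / WEB_TCP_80 env vars).
String host = System.getProperty('web.host')
int port = Integer.parseInt(System.getProperty('web.tcp.80'))
// ...connect to "$host:$port" in the test...
```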

Why use Docker Compose?

  1. I want to be able to run my application on my computer, and it must work for my colleagues as well. Just execute docker compose up and I'm done - e.g. the database is running.
  2. I want to be able to test my application on my computer - I don't want to wait until my application is deployed into a dev/testing environment and the acceptance/end2end tests get executed. I want to execute these tests on my computer - which means executing docker compose up before these tests.

Why this plugin?

You could easily ensure that docker compose up is called before your tests, but there are a few gotchas that this plugin solves:

  1. If you execute docker compose up -d (detached) then this command returns immediately and your application is probably not yet able to serve requests. This plugin waits until all containers become healthy and all exposed TCP ports of all services are open.
    • If waiting for the healthy state or open TCP ports times out (default is 15 minutes) then it prints the log of the related service.
  2. It's recommended not to assign fixed values to exposed ports in docker-compose.yml (i.e. 8888:80) because it can cause port collisions on integration servers. If you don't assign a fixed value to an exposed port (use just 80) then the port is exposed as a random free port. This plugin reads the assigned ports (and even IP addresses of containers) and stores them into the dockerCompose.servicesInfos map.
  3. There are minor differences when using Linux containers on Linux, Windows and Mac, and when using Windows Containers. This plugin handles these differences for you so you have the same experience in all environments.

Usage

The plugin must be applied to the project that contains the docker-compose.yml file. It assumes that Docker Engine and Docker Compose are installed and available on the PATH.

Starting from plugin version 0.17.6, Gradle 6.1 is required, because Task.usesService() is used.

Starting from plugin version 0.17.0, the useDockerComposeV2 property defaults to true, so the new docker compose (instead of the deprecated docker-compose) is used.

Starting from plugin version 0.10.0, Gradle 4.9 or newer is required (because it uses Task Configuration Avoidance API).

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath "com.avast.gradle:gradle-docker-compose-plugin:$versionHere"
    }
}

apply plugin: 'docker-compose'

dockerCompose.isRequiredBy(test) // hooks 'dependsOn composeUp' and 'finalizedBy composeDown', and exposes environment variables and system properties (if possible)

dockerCompose {
    useComposeFiles = ['docker-compose.yml', 'docker-compose.prod.yml'] // like 'docker-compose -f <file>'; default is empty
    startedServices = ['web'] // list of services to execute when calling 'docker-compose up' or 'docker-compose pull' (when not specified, all services are executed)
    scale = [${serviceName1}: 5, ${serviceName2}: 2] // Pass docker compose --scale option like 'docker-compose up --scale serviceName1=5 --scale serviceName2=2'
    forceRecreate = false // pass '--force-recreate' and '--renew-anon-volumes' when calling 'docker-compose up' when set to 'true'
    noRecreate = false // pass '--no-recreate' when calling 'docker-compose up' when set to 'true'
    buildBeforeUp = true // performs 'docker-compose build' before calling the 'up' command; default is true
    buildBeforePull = true // performs 'docker-compose build' before calling the 'pull' command; default is true
    ignorePullFailure = false // when set to true, pass '--ignore-pull-failure' to 'docker-compose pull'
    ignorePushFailure = false // when set to true, pass '--ignore-push-failure' to 'docker-compose push'
    pushServices = [] // which services should be pushed, if not defined then upon `composePush` task all defined services in compose file will be pushed (default behaviour)
    buildAdditionalArgs = ['--force-rm']
    pullAdditionalArgs = ['--ignore-pull-failures']
    upAdditionalArgs = ['--no-deps']
    downAdditionalArgs = ['--some-switch']
    composeAdditionalArgs = ['--context', 'remote', '--verbose', "--log-level", "DEBUG"] // for adding more [options] in docker-compose [-f <arg>...] [options] [COMMAND] [ARGS...]

    waitForTcpPorts = true // turns on/off the waiting for exposed TCP ports opening; default is true
    waitForTcpPortsTimeout = java.time.Duration.ofMinutes(15) // how long to wait until all exposed TCP ports become open; default is 15 minutes
    waitAfterTcpProbeFailure = java.time.Duration.ofSeconds(1) // how long to sleep before the next attempt to check if a TCP port is open; default is 1 second
    tcpPortsToIgnoreWhenWaiting = [1234] // list of TCP ports that will be ignored when waiting for exposed TCP ports to open; default: empty list
    waitForHealthyStateTimeout = java.time.Duration.ofMinutes(15) // how long to wait until a container becomes healthy; default is 15 minutes
    waitAfterHealthyStateProbeFailure = java.time.Duration.ofSeconds(5) // how long to sleep before next attempt to check healthy status; default is 5 seconds
    checkContainersRunning = true // turns on/off checking if container is running or restarting (during waiting for open TCP port and healthy state); default is true

    captureContainersOutput = false // if true, prints output of all containers to Gradle output - very useful for debugging; default is false
    captureContainersOutputToFile = project.file('/path/to/logFile') // sends output of all containers to a log file
    captureContainersOutputToFiles = project.file('/path/to/directory') // sends output of all services to a dedicated log file in the directory specified, e.g. 'web.log' for a service named 'web'
    composeLogToFile = project.file('build/my-logs.txt') // redirect output of composeUp and composeDown tasks to this file; default is null (output is not redirected)
    containerLogToDir = project.file('build/logs') // directory where composeLogs task stores output of the containers; default: build/containers-logs
    includeDependencies = false // calculates services dependencies of startedServices and includes those when gathering logs or removing containers; default is false

    stopContainers = true // doesn't call `docker-compose down` if set to false - see below the paragraph about reconnecting; default is true
    removeContainers = true // default is true
    retainContainersOnStartupFailure = false // if set to true, skips running ComposeDownForced task when ComposeUp fails - useful for troubleshooting; default is false
    removeImages = com.avast.gradle.dockercompose.RemoveImages.None // Other accepted values are All and Local
    removeVolumes = true // default is true
    removeOrphans = false // removes containers for services not defined in the Compose file; default is false
    
    projectName = 'my-project' // allow to set custom docker-compose project name (defaults to a stable name derived from absolute path of the project and nested settings name), set to null to Docker Compose default (directory name)
    projectNamePrefix = 'my_prefix_' // allow to set custom prefix of docker-compose project name, the final project name has nested configuration name appended
    executable = '/path/to/docker-compose' // allow to set the base Docker Compose command (useful if not present in PATH). Defaults to `docker-compose`. Ignored if useDockerComposeV2 is set to true.
    useDockerComposeV2 = true // Use Docker Compose V2 instead of Docker Compose V1, default is true. If set to true, `dockerExecutable compose` is used for execution, so executable property is ignored.
    dockerExecutable = '/path/to/docker' // allow to set the path of the docker executable (useful if not present in PATH)
    dockerComposeWorkingDirectory = project.file('/path/where/docker-compose/is/invoked/from')
    dockerComposeStopTimeout = java.time.Duration.ofSeconds(20) // time before docker-compose sends SIGTERM to the running containers after the composeDown task has been started
    environment.put 'BACKEND_ADDRESS', '192.168.1.100' // environment variables to be used when calling 'docker-compose', e.g. for substitution in compose file
}
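Most builds need only a handful of these settings; everything else has sensible defaults. A minimal configuration might look like this (file and service names are illustrative):

```groovy
// build.gradle -- minimal sketch
dockerCompose {
    useComposeFiles = ['docker-compose.test.yml']
    startedServices = ['web', 'db']
    captureContainersOutput = true  // stream container output into Gradle's log
}

dockerCompose.isRequiredBy(test)
```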

test.doFirst {
    // exposes "${serviceName}_HOST" and "${serviceName}_TCP_${exposedPort}" environment variables
    // for example exposes "WEB_HOST" and "WEB_TCP_80" environment variables for service named `web` with exposed port `80`
    // if service is scaled using scale option, environment variables will be exposed for each service instance like "WEB_1_HOST", "WEB_1_TCP_80", "WEB_2_HOST", "WEB_2_TCP_80" and so on
    dockerCompose.exposeAsEnvironment(test)
    // exposes "${serviceName}.host" and "${serviceName}.tcp.${exposedPort}" system properties
    // for example exposes "web.host" and "web.tcp.80" system properties for service named `web` with exposed port `80`
    // if service is scaled using scale option, environment variables will be exposed for each service instance like "web_1.host", "web_1.tcp.80", "web_2.host", "web_2.tcp.80" and so on
    dockerCompose.exposeAsSystemProperties(test)
    // get information about container of service `web` (declared in docker-compose.yml)
    def webInfo = dockerCompose.servicesInfos.web.firstContainer
    // in case the scale option is used, dockerCompose.servicesInfos.web.containerInfos will contain information about all running containers of the service. A particular container can be retrieved either by iterating the values of the containerInfos map (the key is the service instance name, e.g. 'web_1'), or directly:
    // def webInfo = dockerCompose.servicesInfos.web.'web_1'
    // pass host and exposed TCP port 80 as custom-named Java System properties
    systemProperty 'myweb.host', webInfo.host
    systemProperty 'myweb.port', webInfo.ports[80]
    // it's possible to read information about exposed UDP ports using webInfo.udpPorts[1234]
}

Nested configurations

It is possible to create a new set of ComposeUp/ComposeBuild/ComposePull/ComposeDown/ComposeDownForced/ComposePush tasks using the following syntax:

Groovy
dockerCompose {
    // settings as usual
    myNested {
        useComposeFiles = ['docker-compose-for-integration-tests.yml']
        isRequiredBy(project.tasks.myTask)
    }
}
  • It creates myNestedComposeUp, myNestedComposeBuild, myNestedComposePull, myNestedComposeDown, myNestedComposeDownForced and myNestedComposePush tasks.
  • It's possible to use all the settings as in the main dockerCompose block.
  • Configuration of the nested settings defaults to the main dockerCompose settings (declared before the nested settings), except following properties: projectName, startedServices, useComposeFiles, scale, captureContainersOutputToFile, captureContainersOutputToFiles, composeLogToFile, containerLogToDir, pushServices

When exposing service info from the myNestedComposeUp task into your task, use the following syntax:

test.doFirst {
    dockerCompose.myNested.exposeAsEnvironment(test)
}
Kotlin
test.doFirst {
    dockerCompose.nested("myNested").exposeAsEnvironment(project.tasks.named("test").get())
}

It's also possible to use this simplified syntax:

dockerCompose {
    isRequiredByMyTask 'docker-compose-for-integration-tests.yml'
}

Reconnecting

If you set stopContainers to false then the plugin automatically tries to reconnect to the containers from the previous run instead of calling docker-compose up again. Thanks to this, the startup can be very fast.

It's very handy in scenarios when you iterate quickly and e.g. don't want to wait for Postgres to start again and again.

Because you don't want to check this change in to your VCS, you can take advantage of this init.gradle initialization script (in short, copy this file to your USER_HOME/.gradle/ directory).
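A minimal version of such an init script could look like the following. This is a sketch under the assumption that the extension is named dockerCompose and exposes a writable stopContainers property, as described in this README; adapt it to the plugin version you use:

```groovy
// ~/.gradle/init.gradle -- sketch: keep containers running between builds on this machine only
allprojects {
    afterEvaluate { project ->
        def compose = project.extensions.findByName('dockerCompose')
        if (compose != null) {
            compose.stopContainers = false  // reconnect to previous containers on the next run
        }
    }
}
```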

Usage from Kotlin DSL

This plugin can also be used from the Kotlin DSL, see the example:

import com.avast.gradle.dockercompose.ComposeExtension
apply(plugin = "docker-compose")
configure<ComposeExtension> {
    includeDependencies.set(true)
    createNested("local").apply {
        setProjectName("foo")
        environment.putAll(mapOf("TAGS" to "feature-test,local"))
        startedServices.set(listOf("foo-api", "foo-integration"))
        upAdditionalArgs.set(listOf("--no-deps"))
    }
}

Tips

  • You can call dockerCompose.isRequiredBy(anyTask) for any task, for example for your custom integrationTest task.
  • If some Dockerfile needs an artifact generated by Gradle then you can declare this dependency in a standard way, like composeUp.dependsOn project(':my-app').distTar
  • All properties in dockerCompose have meaningful default values so you don't have to touch them. If you are interested, you can look at ComposeSettings.groovy for reference.
  • dockerCompose.servicesInfos contains information about the running containers, so you must access this property after the composeUp task has finished. So doFirst of your test task is the perfect place to access it.
  • The plugin honours a docker-compose.override.yml file, but only when no files are specified with useComposeFiles (conforming to the command-line behavior).
  • Check ContainerInfo.groovy to see what you can know about running containers.
  • You can determine the Docker host in your Gradle build (e.g. via docker-machine start) and set the DOCKER_HOST environment variable for compose to use: dockerCompose { environment.put 'DOCKER_HOST', '192.168.64.9' }
  • If the services executed by docker-compose are running on a specific host (different from Docker, as in CircleCI 2.0), then the SERVICES_HOST environment variable can be used. This value will be used as the hostname where the services are expected to be listening.
  • If you need to troubleshoot a failing ComposeUp task, set retainContainersOnStartupFailure to true to prevent containers from being forcibly deleted. This does not override removeContainers, so a subsequent ComposeDown is not affected.
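For example, on a CI agent where the services listen on a remote host, SERVICES_HOST could be exported before invoking Gradle. A usage sketch (the IP and task name are illustrative):

```shell
# tell the plugin where the services actually listen
export SERVICES_HOST=10.0.0.5
./gradlew integrationTest
```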

gradle-docker-compose-plugin's People

Contributors

alenkacz, alexarana, alikhachev, anemortalkid, antonwiens, aschrijver, augi, breskeby, checketts, davidsoff, dependabot-preview[bot], double16, gsmirnov-splk, hszemi, jakubjanecek, jansyk13, jdai8, jeloba, joecotton-wk, kai-zhu, laurentleseigneur, lydiaralph, maciejdobrowolski, mkw, psxpaul, renovate-bot, renovate[bot], simondwilliams, skn0tt, swalendzik


gradle-docker-compose-plugin's Issues

Waiting for health status behaviour is not sufficient

Hi,

at the moment the health status check / wait function returns as soon as the status switches from "starting" to something else. That status might also be "unhealthy" for a while before switching to "healthy". This can be observed for services that really need a long time to start (like Jetty). It can be worked around with unreasonably high thresholds in the health check definition, but it would be nice to see this fixed somehow.

Cheers,
Christian
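One compose-file-level mitigation for the scenario described above is a healthcheck start_period (available since Compose file format 2.3/3.4), during which failing probes do not count towards the unhealthy state. A sketch with illustrative service name, image, and thresholds:

```yaml
services:
  jetty-app:                # illustrative service name
    image: my/jetty-app     # illustrative image
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 10s
      retries: 5
      start_period: 120s    # probe failures in this window don't mark the container unhealthy
```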

servicesInfos is empty

Hi all,

I saw other issues about support for docker-compose v2, and about servicesInfos being empty when the compose file uses the yaml extension instead of yml, but I think my case is different.

My docker-compose.yml is this

version: '2'
services:
  redis:
    image: redis:alpine
    volumes:
      - redis-data:/var/lib/redis
    ports:
      - "6379:6379"
volumes:
  redis-data:
    driver: local

Also tried with version 1

redis:
  image: redis:alpine
  ports:
    - "6379:6379"

My project is a multiproject with structure

.
├── admin-panel
│   ├── build
│   └── src
├── api
│   ├── build
│   └── src
├── build
│   └── classes
├── core
│   ├── build
│   ├── lib
│   └── src
├── domain
│   ├── build
│   └── src

My Gradle build is this

dockerCompose.isRequiredBy(test)
    dockerCompose {
        //useComposeFiles = ['../docker-compose.yml'] // like 'docker-compose -f <file>'
        stopContainers = true                                              // useful for debugging
        removeContainers = true
        // removeImages = "None" // Other accepted values are: "All" and "Local"
        // removeVolumes = false
        //environment.put 'BACKEND_ADDRESS', '192.168.1.100' // Pass environment variable to 'docker-compose' for substitution in compose file
        environment local_environment
    }
...
test.doFirst {
        // exposes "${serviceName}_HOST" and "${serviceName}_TCP_${exposedPort}" environment variables
        // for example exposes "WEB_HOST" and "WEB_TCP_80" environment variables for service named `web` with exposed port `80`
        dockerCompose.exposeAsEnvironment(test)
        // exposes "${serviceName}.host" and "${serviceName}.tcp.${exposedPort}" system properties
        // for example exposes "web.host" and "web.tcp.80" system properties for service named `web` with exposed port `80`
        dockerCompose.exposeAsSystemProperties(test)
        dockerCompose.servicesInfos
        // get information about container of service `web` (declared in docker-compose.yml)
        def redis = dockerCompose.servicesInfos
        // pass host and exposed TCP port 80 as custom-named Java System properties
        println "REDIS-------------------"
        println redis;
        systemProperty 'docker.redisIP', redis.getHost()
        //systemProperty 'myweb.port', webInfo.ports[80]
    }

And when I run the tests like this
./gradlew test --tests com..... -p api
I am getting this output

:domain:compileJava UP-TO-DATE
:domain:processResources UP-TO-DATE
:domain:classes UP-TO-DATE
:domain:jar UP-TO-DATE
....
REDIS-------------------
{}

Also, to point out: for the line redis.getHost() I have also tried redis.host as well as def redis = dockerCompose.servicesInfos.redis, and got the same errors.

If you need further details let me know.

Regards
Alex.

Capture container log output

Is it possible to capture the logs from the containers?

I often want to inspect the logs of a service, and right now to achieve that I need to quickly run

docker logs -f <service name>

in another terminal while the tests are running. Alternatively I can set removeContainers to false, but then I don't get a fresh container each time I run the tests.

Docker for Mac has docker-compose in path (/usr/local/bin) but not seen by plugin

If I specify: executable = '/usr/local/bin/docker-compose' then docker-compose command works but docker command fails, (may be docker inspect?). This confirms to me that the PATH is not being taken into account. Instead it is trying to locate docker-compose in the project folder. Note that running docker-compose --version works from the same terminal I launch ./gradlew build. (Gradle version 3.4).

dockerCompose.isRequiredBy(test)

dockerCompose {
	useComposeFiles = ['src/test/docker/docker-compose-test.yml']
    //executable = '/usr/local/bin/docker-compose'
    // useComposeFiles = ['docker-compose.yml', 'docker-compose.prod.yml'] // like 'docker-compose -f <file>'
    // captureContainersOutput = true // prints output of all containers to Gradle output - very useful for debugging
    // stopContainers = false // doesn't call `docker-compose down` - useful for debugging
    // removeContainers = false
    // removeImages = "None" // Other accepted values are: "All" and "Local"
    // removeVolumes = false
    // projectName = 'my-project' // allow to set custom docker-compose project name (defaults to directory name)
    // executable = '/path/to/docker-compose' // allow to set the path of the docker-compose executable if not present in PATH
    // environment.put 'BACKEND_ADDRESS', '192.168.1.100' // Pass environment variable to 'docker-compose' for substitution in compose file
}
Caused by: net.rubygrapefruit.platform.NativeException: Could not start 'docker-compose'
        at net.rubygrapefruit.platform.internal.DefaultProcessLauncher.start(DefaultProcessLauncher.java:27)
        at net.rubygrapefruit.platform.internal.WrapperProcessLauncher.start(WrapperProcessLauncher.java:36)
        at org.gradle.process.internal.ExecHandleRunner.run(ExecHandleRunner.java:68)
        ... 2 more
Caused by: java.io.IOException: Cannot run program "docker-compose" (in directory "/Users/myusername/subdir/subdir/testApp"): error=2, No such file or directory
        at net.rubygrapefruit.platform.internal.DefaultProcessLauncher.start(DefaultProcessLauncher.java:25)
        ... 4 more
Caused by: java.io.IOException: error=2, No such file or directory
        ... 5 more

Support overriding parameters from ComposeExtension

Hey guys. It would be nice to fine tune some of the parameters/variables from ComposeExtension within the dockerCompose block of build.gradle.

As stated in the Tips section of README.md "All properties in dockerCompose have meaningful default values so you don't have to touch it" but I think there are some scenarios where overriding would enhance the user experience.

If you startup a container that needs some more time (e.g. 20 seconds) then waitForOpenTcpPorts will output "Waiting for TCP socket on ${service.host}:${forwardedPort} of service '${service.name}' (${e.message})" about 20 times because waitAfterTcpProbeFailure is set to just 1 second.

In this case it would be convenient if I could set waitAfterTcpProbeFailure to, for instance, 5s within build.gradle.

Thanks for this great plugin.
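With current plugin versions, this property is configurable exactly as requested (it appears in the configuration reference earlier in this README). A sketch:

```groovy
// build.gradle -- probe every 5 seconds instead of the default 1 second
dockerCompose {
    waitAfterTcpProbeFailure = java.time.Duration.ofSeconds(5)
}
```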

groovy.lang.MissingMethodException: No signature of method: java.util.LinkedHashMap$LinkedValues.head()

Hey,
I'm trying to use your plugin on TeamCity CI, but I'm getting groovy exception like this:

Caused by: groovy.lang.MissingMethodException: No signature of method: java.util.LinkedHashMap$LinkedValues.head() is applicable for argument types: () values: []
Possible solutions: clear(), clear(), max(), find(), find(), add(java.lang.Object)
        at com.avast.gradle.dockercompose.tasks.ComposeUp.getServiceHost(ComposeUp.groovy:108)
        at com.avast.gradle.dockercompose.tasks.ComposeUp$getServiceHost.callCurrent(Unknown Source)
        at com.avast.gradle.dockercompose.tasks.ComposeUp.createServiceInfo(ComposeUp.groovy:60)
        at com.avast.gradle.dockercompose.tasks.ComposeUp$_loadServicesInfo_closure4.doCall(ComposeUp.groovy:53)
        at com.avast.gradle.dockercompose.tasks.ComposeUp.loadServicesInfo(ComposeUp.groovy:53)
        at com.avast.gradle.dockercompose.tasks.ComposeUp.up(ComposeUp.groovy:41)
        at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:63)
        at org.gradle.api.internal.project.taskfactory.AnnotationProcessingTaskFactory$StandardTaskAction.doExecute(AnnotationProcessingTaskFactory.java:218)
        at org.gradle.api.internal.project.taskfactory.AnnotationProcessingTaskFactory$StandardTaskAction.execute(AnnotationProcessingTaskFactory.java:211)
        at org.gradle.api.internal.project.taskfactory.AnnotationProcessingTaskFactory$StandardTaskAction.execute(AnnotationProcessingTaskFactory.java:200)
        at org.gradle.api.internal.AbstractTask$TaskActionWrapper.execute(AbstractTask.java:585)
        at org.gradle.api.internal.AbstractTask$TaskActionWrapper.execute(AbstractTask.java:568)
        at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeAction(ExecuteActionsTaskExecuter.java:80)
        at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:61)

It's about this line:
https://github.com/avast/docker-compose-gradle-plugin/blob/master/src/main/groovy/com/avast/gradle/dockercompose/tasks/ComposeUp.groovy#L108
and seems like groovy incompatibility problem.

Versions:
docker-compose-gradle-plugin: 0.2.85
groovy: 2.4.4

Update:
working for:
docker-compose-gradle-plugin: 0.1.59

Multiple tasks composeUp/Down

Hi,

First of all, great plug-in, simple yet powerful. There are plenty of gradle-docker plugins. Yours was like the fifth or sixth I've tried, and I decided not to look any further.

But I have a question: have you considered supporting multiple compose configurations in one Gradle project?
Right now it's impossible to have two separate compose.ymls, one for, let's say, integration tests, and a second for other integration tests.
I don't know if that's clear. The first set of tests are units that test, for example, DAO classes against some database which runs inside Docker and is started by your plug-in. The second set are full-blown integration tests that require a different compose.yml where the application being developed is run the way it will run in production.
Right now it's possible to accomplish this with multiple Gradle projects. But I've discovered that your code is quite flexible and it is possible to do things like:

import com.avast.gradle.dockercompose.tasks.ComposeUp
task xxx(type: ComposeUp){
    extension = new com.avast.gradle.dockercompose.ComposeExtension(project, xxx,
            null);
    extension.useComposeFiles = ['src/test/docker/docker4test-sample/docker-compose.yml']

}

Then I can have a second composeUp task with its own configuration.

So I think it's simple to make this work in a less disgusting way; the only problem is how to overcome the fact that the tasks (Up and Down) need references to each other, but IMHO it's doable. Would you consider such a change? I'm asking because I don't know whether to prepare a PR or just do it for myself by wrapping your plug-in.
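The nested-configuration syntax described earlier in this README now covers this use case directly. A sketch (file, task, and configuration names are illustrative):

```groovy
// build.gradle -- two independent compose setups in one project, via nested configurations
dockerCompose {
    unitDb {
        useComposeFiles = ['src/test/docker/db-only.yml']
        isRequiredBy(project.tasks.test)
    }
    fullStack {
        useComposeFiles = ['src/test/docker/full-stack.yml']
        isRequiredBy(project.tasks.integrationTest)
    }
}
```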

Support for -p parameter

It would be really beneficial for us (ff :)) to have support for the -p parameter. It would solve a problem for us when multiple jobs run concurrently and try to create containers with the same name.

Different probing behavior on Mac and Linux

First, thanks for your great work on this plugin, it's helped my project a lot.

I have a docker-compose config with a couple of services. I use plugin version 0.4.5.

On Mac, composeUp gives this output:

foo uses an image, skipping
elasticsearch uses an image, skipping
mailhog uses an image, skipping
postgres uses an image, skipping
bar uses an image, skipping
Creating network "e2eunversioned_default" with the default driver
Creating e2eunversioned_foo_1 ... 
Creating e2eunversioned_mailhog_1 ... 
Creating e2eunversioned_elasticsearch_1 ... 
Creating e2eunversioned_postgres_1 ... 
Creating e2eunversioned_foo_1
Creating e2eunversioned_elasticsearch_1
Creating e2eunversioned_mailhog_1
Creating e2eunversioned_elasticsearch_1 ... done
Creating e2eunversioned_bar_1 ... 
Creating e2eunversioned_bar_1 ... done
<-------------> 0% EXECUTING                                Will use localhost as host of foo
Will use localhost as host of elasticsearch
Will use localhost as host of mailhog
Will use localhost as host of postgres
Will use localhost as host of bar
Probing TCP socket on localhost:32950 of service 'foo_1'
Waiting for TCP socket on localhost:32950 of service 'foo_1' (TCP connection on localhost:32950 of service 'foo_1' was disconnected right after connected)
Waiting for TCP socket on localhost:32950 of service 'foo_1' (TCP connection on localhost:32950 of service 'foo_1' was disconnected right after connected)
Waiting for TCP socket on localhost:32950 of service 'foo_1' (TCP connection on localhost:32950 of service 'foo_1' was disconnected right after connected)
TCP socket on localhost:32950 of service 'foo_1' is ready
Probing TCP socket on localhost:32955 of service 'elasticsearch_1'
TCP socket on localhost:32955 of service 'elasticsearch_1' is ready
Probing TCP socket on localhost:32954 of service 'elasticsearch_1'
TCP socket on localhost:32954 of service 'elasticsearch_1' is ready
Probing TCP socket on localhost:32953 of service 'mailhog_1'
TCP socket on localhost:32953 of service 'mailhog_1' is ready
Probing TCP socket on localhost:32952 of service 'mailhog_1'
TCP socket on localhost:32952 of service 'mailhog_1' is ready
Probing TCP socket on localhost:32951 of service 'postgres_1'
TCP socket on localhost:32951 of service 'postgres_1' is ready
Probing TCP socket on localhost:32956 of service 'bar_1'
Waiting for TCP socket on localhost:32956 of service 'bar_1' (TCP connection on localhost:32956 of service 'bar_1' was disconnected right after connected)
Waiting for TCP socket on localhost:32956 of service 'bar_1' (TCP connection on localhost:32956 of service 'bar_1' was disconnected right after connected)
Waiting for TCP socket on localhost:32956 of service 'bar_1' (TCP connection on localhost:32956 of service 'bar_1' was disconnected right after connected)
Waiting for TCP socket on localhost:32956 of service 'bar_1' (TCP connection on localhost:32956 of service 'bar_1' was disconnected right after connected)
Waiting for TCP socket on localhost:32956 of service 'bar_1' (TCP connection on localhost:32956 of service 'bar_1' was disconnected right after connected)
Waiting for TCP socket on localhost:32956 of service 'bar_1' (TCP connection on localhost:32956 of service 'bar_1' was disconnected right after connected)
Waiting for TCP socket on localhost:32956 of service 'bar_1' (TCP connection on localhost:32956 of service 'bar_1' was disconnected right after connected)
TCP socket on localhost:32956 of service 'bar_1' is ready

BUILD SUCCESSFUL

As you can see, my own artifacts, 'foo' and 'bar', take some time to start up. But everything works as expected, great.

On Linux, I see this:

:browser-testing:composeUp
foo uses an image, skipping
elasticsearch uses an image, skipping
mailhog uses an image, skipping
postgres uses an image, skipping
bar uses an image, skipping
Creating network "e2euntagged5695g1834d8a_default" with the default driver
Creating e2euntagged5695g1834d8a_foo_1 ... 
Creating e2euntagged5695g1834d8a_postgres_1 ... 
Creating e2euntagged5695g1834d8a_elasticsearch_1 ... 
Creating e2euntagged5695g1834d8a_mailhog_1 ... 
Creating e2euntagged5695g1834d8a_foo_1
Creating e2euntagged5695g1834d8a_elasticsearch_1
Creating e2euntagged5695g1834d8a_postgres_1
Creating e2euntagged5695g1834d8a_mailhog_1

Creating e2euntagged5695g1834d8a_foo_1 ... done

Creating e2euntagged5695g1834d8a_postgres_1 ... done

Creating e2euntagged5695g1834d8a_mailhog_1 ... done
Creating e2euntagged5695g1834d8a_bar_1 ... 
Creating e2euntagged5695g1834d8a_bar_1


Will use 172.20.0.1 (network e2euntagged5695g1834d8a_default) as host of foo
Will use 172.20.0.1 (network e2euntagged5695g1834d8a_default) as host of elasticsearch
Will use 172.20.0.1 (network e2euntagged5695g1834d8a_default) as host of mailhog
Will use 172.20.0.1 (network e2euntagged5695g1834d8a_default) as host of postgres
Will use 172.20.0.1 (network e2euntagged5695g1834d8a_default) as host of bar
Probing TCP socket on 172.20.0.1:10381 of service 'foo_1'
TCP socket on 172.20.0.1:10381 of service 'foo_1' is ready
Probing TCP socket on 172.20.0.1:10383 of service 'elasticsearch_1'
TCP socket on 172.20.0.1:10383 of service 'elasticsearch_1' is ready
Probing TCP socket on 172.20.0.1:10382 of service 'elasticsearch_1'
TCP socket on 172.20.0.1:10382 of service 'elasticsearch_1' is ready
Probing TCP socket on 172.20.0.1:10386 of service 'mailhog_1'
TCP socket on 172.20.0.1:10386 of service 'mailhog_1' is ready
Probing TCP socket on 172.20.0.1:10385 of service 'mailhog_1'
TCP socket on 172.20.0.1:10385 of service 'mailhog_1' is ready
Probing TCP socket on 172.20.0.1:10384 of service 'postgres_1'
TCP socket on 172.20.0.1:10384 of service 'postgres_1' is ready
Probing TCP socket on 172.20.0.1:10387 of service 'bar_1'
TCP socket on 172.20.0.1:10387 of service 'bar_1' is ready

BUILD SUCCESSFUL

As you can see, here the services foo and bar seem to come up almost instantly, which is suspicious.

Further inspection of those containers shows that the plugin seems to report a false positive: I can see that foo and bar are not up yet. The ports are open, but there is never any reply, and the peer does not close the connection. On Mac, the peer closes the connection without a reply.

Do you have a clue what's going on?

This may very well be due to something I don't quite understand about docker networking, in which case I guess I should ask the question somewhere else.

Support for scale

Hi, what about adding support for docker-compose scale?

The docker-compose people are working on some changes in this matter (docker/compose#1661), but until they get it sorted out it would be nice not to have to hardcode exec calls in Gradle...

Improve docker-compose configuration detection

I usually use this plugin in a multi-module project where docker-compose.yml is located in the root project directory. The ComposeUp and ComposeDown tasks work fine (they rely on docker-compose behavior), but the health check does not take place because servicesInfos is not resolved (the plugin looks for docker-compose.yml in the current directory, not in the root project directory):

https://github.com/avast/docker-compose-gradle-plugin/blob/c4d794c72147e4a13136f15aa6d2387a91c878a2/src/main/groovy/com/avast/gradle/dockercompose/tasks/ComposeUp.groovy#L70

String[] composeFiles = extension.useComposeFiles.empty ? ['docker-compose.yml', 'docker-compose.override.yml'] : extension.useComposeFiles
composeFiles
            .findAll { project.file(it).exists() }
<...>

I had to debug the plugin to find out that I was missing

dockerCompose {
    useComposeFiles = ['../docker-compose.yml']
}

It would be nice to get a hint when the docker-compose file cannot be found in the expected location, or even better, to improve docker-compose.yml detection by mimicking docker-compose itself:

https://docs.docker.com/compose/reference/overview/

The -f flag is optional. If you don’t provide this flag on the command line, Compose traverses the working directory and its parent directories looking for a docker-compose.yml and a docker-compose.override.yml file.

ComposeUp task does not work from the IntelliJ Gradle Projects panel

You get the following error:

* What went wrong:
Execution failed for task ':composeUp'.
> A problem occurred starting process 'command 'docker-compose''

It seems that the environment used by the Gradle plugin for IntelliJ does not have docker-compose on the PATH. Maybe it would be a good idea to add an option to manually configure the path where the docker-compose executable is located.

Docker Compose is executed even if `compileTest` fails

It happens when using dockerCompose.isRequiredBy(test). The problem is that test depends on both the compileTest and dockerComposeUp tasks, but their order is undefined.

The plugin could ensure that dockerComposeUp is the last task to run before test. For example, we could iterate over all dependencies of the test task and call mustRunAfter or shouldRunAfter on them. We could do this for all tasks, or just for the compile tasks.
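Until the plugin orders this automatically, a manual workaround can be sketched directly in the build script (assuming the Java plugin's `compileTestJava` task name):

```groovy
// Sketch: order composeUp after test compilation, so a compile
// failure never starts the Docker Compose stack.
// compileTestJava is the Java plugin's test-compilation task.
composeUp.shouldRunAfter compileTestJava
```

shouldRunAfter only affects ordering, not whether the tasks run, so the existing dependency wiring stays intact.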

Issue with option --remove-orphans when stopping containers

Issue #72 added support for the --remove-orphans option to both the ComposeUp and ComposeDown tasks. However, the ComposeDown implementation adds the option when building the 'stop' command rather than 'down', which is where the option is actually supported.

I have prepared a PR to fix the said issue:
#78

Specifies the location of the docker-compose executable

Our CI system doesn't have docker-compose installed in the PATH.

Allowing the location of the docker-compose executable to be configured (keeping the PATH-relative one as the default) would let the build script download docker-compose and use the downloaded executable.
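Such an option could look roughly like this (a sketch; the `executable` property name is an assumption for the plugin version discussed here):

```groovy
dockerCompose {
    // Assumed property: absolute path to the docker-compose binary,
    // falling back to a PATH lookup when not set.
    executable = '/usr/local/bin/docker-compose'
}
```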

Compose-up waits forever if DOCKER_HOST maps the local socket

Hi,

It appears I am stuck in an endless wait on Mac if I have set the DOCKER_HOST environment variable to unix:///var/run/docker.sock.

My guess is that the getServiceHost function tries to get the hostname from the "URL" and yields null. The console output says:

Waiting for TCP socket on null:1234 of service 'foo' (Ambiguous method overloading for method java.net.Socket#<init>.
Cannot resolve which method to invoke for [null, class java.lang.Integer] due to overlapping prototypes between:
        [class java.lang.String, int]
        [class java.net.InetAddress, int])

Unsetting DOCKER_HOST seems to resolve the issue.

Best regards,
Emil

Possible to pull docker container from private registry?

Hi,
I get the following issue when I try to pull a Docker image from a private registry on hub.docker.com:
Pulling repository docker.io/someprivateRepo/someImage Error: image someprivateRepo/someImage:latest not found

When I try that with the docker pull command I have no issues.

So my question: is there a restriction? Can I do that?

Is this the correct way to call composeDown twice?

task finalComposeDown(type: com.avast.gradle.dockercompose.tasks.ComposeDown) {
  extension = new ComposeExtension(project, composeUp, composeDown)
}

integrationTest {
  dependsOn composeDown, composeUp
  finalizedBy finalComposeDown
}

I'm new to Gradle. I'm trying to make sure that a MySQL container is freshly created before running the integration tests.

Is there a better way to do this?
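One possible sketch, under the assumption that a second task of the plugin's ComposeDown type can reuse the existing dockerCompose extension instead of constructing a new one:

```groovy
import com.avast.gradle.dockercompose.tasks.ComposeDown

// Hypothetical sketch: tear the stack down before the tests (composeDown),
// bring it up fresh (composeUp), and tear it down again afterwards.
task finalComposeDown(type: ComposeDown) {
    extension = dockerCompose  // assumption: reuse the plugin's extension
}

integrationTest {
    dependsOn composeDown, composeUp
    finalizedBy finalComposeDown
}
composeUp.mustRunAfter composeDown  // ensure the teardown runs first
```

A second task is needed because Gradle executes each task at most once per build, so composeDown itself cannot run both before and after the tests.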

Service host is not available for container with HOST networking

For a Docker container with host networking, this is what docker inspect looks like:

"Networks": {
                "host": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "52ca975025b511bfc42caf02c4777bdb9ac2570c95c429339364d80d388a2eea",
                    "EndpointID": "0c56d7ee3d9df21d762b0c99ef79b1252476e0cd2cf62a1af0ea588829f1a87e",
                    "Gateway": "",
                    "IPAddress": "",
                    "IPPrefixLen": 0,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": ""
                }
            }

The current implementation parses the networks and uses the empty Gateway string as the hostname. However, I think that in this case it should use localhost instead.

Alternative way for health check

I'm not able to use Docker 1.12 in production as it is not yet available on AWS Elastic Beanstalk. I'm thinking about adding an alternate way to determine a container's healthy state. Port checking does not work for me because the ports become ready immediately, way before the actual application startup is completed. Here are some options that could make it work:

WDYT?

Adding function to do pull

Hey, thanks for the nice plugin. I'd like to see docker-compose pull functionality and would be happy to implement it and do a PR; I just wanted your feedback to see if you agree with the idea. It would be another task you could add to your build lifecycle. The ComposeExtension class is holding quite a bit of data; do you think it would make sense to have this as a separate task instead? Adding an analogous function for 'build' at the same time would probably also make sense.

thanks,
brian

Remove running containers for a failed task

Hi! Thanks for all the great work on this plugin, it's the best one I could find for Gradle.

I'm having one issue with it though. I'm not very familiar with Gradle, so maybe I missed something here. I'm trying to stop the Docker containers in case the integrationTest task fails.

task integrationTest(type: Test) {
    dependsOn composeUp, intTest
    finalizedBy composeDown
}

dockerCompose {
    stopContainers = true
}

With this setup, even if some test fails I see that composeDown is not executed, which leaves the Docker containers running. Is there a way to stop them regardless of the task result? Thanks!
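As shown in the quick-start section above, the plugin's isRequiredBy helper wires composeUp as a dependency and composeDown as a finalizer. Gradle runs finalizers even when the finalized task fails, so this should stop the containers regardless of the test result:

```groovy
// Replaces the manual dependsOn/finalizedBy wiring:
dockerCompose.isRequiredBy(integrationTest)

dockerCompose {
    stopContainers = true  // default; containers are stopped and removed
}
```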

Save container output to a file?

First, thank you for the great plugin.

Is there an easy way of saving the container output to a file? I see there is a captureContainersOutput option, but that prints the container output to stdout. Since our application can log a lot of things, our gradle build output becomes a fire hose if I turn this on. But frequently, we have tests fail in CI and I would like to see if there were any errors in the application.

I currently have the following code to do this at the end of a build:

composeDown.doFirst {
    mkdir "$buildDir/logs/docker"
    project.exec { ExecSpec e ->
        extension.setExecSpecWorkingDirectory(e)
        e.environment = composeDown.extension.environment
        e.commandLine composeDown.extension.composeCommand('logs', '--no-color', 'app')
        e.standardOutput = new FileOutputStream("$buildDir/logs/docker/app.log")
    }
}

This is somewhat cumbersome though, and relies on the current implementation of the composeDown task. It also doesn't write the log files until the end of the build. Ideally, the captureContainersOutput option could take a file or output stream to make this easier.
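The composeLogs task mentioned at the top of this README (available in later plugin versions) covers this use case; a sketch, assuming containerLogToDir accepts a File:

```groovy
dockerCompose {
    // composeLogs writes each container's output to files
    // in this directory instead of flooding stdout
    containerLogToDir = project.file("$buildDir/logs/docker")
}
```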

Ability to pass environment vars to docker-compose up

Use case: using the same YML file for the integration test environment and for manual QA verification. The YML file contains environment variables and there is no way to pass them to the composeUp task so that docker-compose up/build can interpolate them.

I'm primarily interested in setting the HOST_IP var that is discovered by the plugin as ServiceInfo.host, but the ability to pass any env var to docker-compose up/build is quite generic.
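A sketch of what such an option could look like (the `environment` map property is an assumption here; later plugin versions expose a map that is handed to the docker-compose process):

```groovy
dockerCompose {
    // Assumed map of environment variables passed to docker-compose,
    // available for ${HOST_IP}-style interpolation in the YML file.
    environment.put 'HOST_IP', '192.168.99.100'  // example value
}
```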

Add support for HEALTHCHECK Docker feature

Docker 1.12 adds support for the HEALTHCHECK command in Dockerfile, so we can add support for it to the plugin. It means that we will read the status of running containers (using docker inspect). Waiting for open TCP ports remains.

This should also fix the problem with Docker For Mac that opens all exposed ports immediately (even if the application is not running).

Extension to check if container is ready

First of all, thank you for the wonderful work you are doing.
I have a suggestion, though, on a possible extension of your plugin.
I am using it to run IT tests for an application deployed on WildFly. Unfortunately, it exposes port 8080 way earlier than the application is actually ready to receive calls. What I suggest is to add something like

void waitForOkHttp(Map<String, String> urlsToWait, Iterable<ServiceInfo> servicesInfos) {
    Map<String, Integer> servicePorts = [:]

    servicesInfos.forEach { serviceInfo ->
        servicePorts.put(serviceInfo.name, serviceInfo.getTcpPort())
    }

    urlsToWait.forEach { service, u ->
        def urlStr = "http://localhost:" + servicePorts.get(service) + u
        logger.lifecycle("Waiting for service ${service} to return a successful response status on url: ${urlStr}")

        URL url = new URL(urlStr)

        while (true) {
            logger.info("Waiting...")

            try {
                HttpURLConnection http = (HttpURLConnection) url.openConnection()
                int statusCode = http.getResponseCode()

                if (statusCode < 400) {
                    logger.lifecycle("Service: ${service}, url: ${urlStr} replied with status ${statusCode}.")
                    break
                } else {
                    sleep(5000)
                }
            } catch (Exception ex) {
                sleep(5000)
            }
        }
    }
}

into UpTask and extend configuration with parameter:

 waitForOkHttp = [
        "application-cotainer": "/rest/ping"
    ]

This will allow us to "wait" until an application running on an application server is actually ready to receive calls.

Failing test on windows with boot2docker

HOST network container is broken after #25 on windows using boot2docker.

test.environment.get('WEB_HOST') == 'localhost'
|    |           |               |
|    |           172.17.1.100    false
|    |                           12 differences (0% similarity)
|    |                           (172.17.1.100)
|    |                           (localhost---)

Be able to test against multiple docker versions

Since more functionality is added with newer versions, and the old implementation sometimes changes, I think it might be handy to be able to test the plugin against multiple Docker versions...

Support for version 2 compose files

Hi,
if I understand the error message of my build output correctly, the plugin gets confused by a compose file starting with

version: '2'

or did I overlook something?

Pull task

By far the most reliable docker plugin I've found. Great work!

I'm noticing the lack of a dockerComposePull task.

Is this something you would consider as a PR? I don't see any other way to ensure I'm using the latest docker images.

docker-compose ps does not show the running containers

$ ./gradlew composeUp
:composeUp
mysql uses an image, skipping
default_mysql_1 is up-to-date
Will use localhost as host of mysql
Probing TCP socket on localhost:32776 of service 'mysql_1'
TCP socket on localhost:32776 of service 'mysql_1' is ready

BUILD SUCCESSFUL

Total time: 6.075 secs
$ docker-compose ps
Name   Command   State   Ports
------------------------------

I would expect default_mysql_1 to be listed in the ps output.

More robust Docker host detection

Currently it just checks for the presence of the DOCKER_HOST environment variable.

We could try to execute docker-machine and use the first or default machine, i.e. use the docker-machine ls -q and docker-machine ip default commands.

Timeout in waiting for exposed ports

Introduce a timeout in waiting for exposed ports.

If the wait fails, read the service's output using docker-compose logs $service and print it. This makes it possible to find the bug quickly (typically, you will see an exception during service startup).
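A configuration sketch of such a timeout (the `waitForTcpPortsTimeout` property name is an assumption for the version discussed here):

```groovy
import java.time.Duration

dockerCompose {
    // Assumed property: give up waiting for exposed ports after
    // 5 minutes, fail composeUp, and print the services' logs.
    waitForTcpPortsTimeout = Duration.ofMinutes(5)
}
```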

Check if the open TCP port is not closed immediately

Even this commit doesn't help to overcome the issue with immediately exposed ports on Windows and Mac.

Fortunately, I found this answer from Docker staff explaining exactly how the binding works. So we should just change the check: open the TCP connection and verify that it is not closed immediately (say, within a few milliseconds).
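The proposed check could look roughly like this (a hypothetical helper, not the plugin's actual code): connect, then verify the peer doesn't close the connection within a short grace period.

```groovy
// Hypothetical sketch of the proposed probe: a port counts as ready only
// if the peer keeps the connection open for a short grace period.
boolean isPortReallyOpen(String host, int port, int graceMillis = 200) {
    try {
        return new Socket(host, port).withCloseable { socket ->
            socket.soTimeout = graceMillis
            try {
                // read() returning -1 means the peer closed the connection
                // right after accepting it (the Windows/Mac false positive).
                return socket.inputStream.read() != -1
            } catch (SocketTimeoutException ignored) {
                return true  // still connected after the grace period: ready
            }
        }
    } catch (IOException ignored) {
        return false  // connection refused: not ready yet
    }
}
```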

Try to eliminate useNetworkGateway settings

Now, if someone uses Docker for Mac, they must manually set useNetworkGateway=false. It would be great to automatically detect the presence of Docker for Mac and, if present, automatically use localhost.

Gradle throws an error after starting Docker

dockerCompose {
    captureContainersOutput = true
    dockerComposeWorkingDirectory = '/path/to/docker-compose.yml'
}
task ('og-start') {
    group 'OG Platform'
    description 'Starts nextgen platform for the build.'
    doFirst {
      composeUp.up()
    }
    doLast {
        def webInfo = dockerCompose.servicesInfos.og.'og_1'
        //make changes to nginx proxy configuration based upon the port
    }
}

task ('og-stop'){
    group 'OG Platform'
    description 'Stops OG container for the build.'
   doLast {
      composeDown.down()
   }
}

For some reason, calling the og-start task sometimes fails with the error below, even though the container is started:

This build could be faster, please consider using the Gradle Daemon: https://docs.gradle.org/2.8/userguide/gradle_daemon.html

Exception in thread "pool-1-thread-1" org.gradle.process.internal.ExecException: Process 'command 'docker-compose'' finished with non-zero exit value 143
	at org.gradle.process.internal.DefaultExecHandle$ExecResultImpl.assertNormalExitValue(DefaultExecHandle.java:367)
	at org.gradle.process.internal.DefaultExecAction.execute(DefaultExecAction.java:31)
	at org.gradle.api.internal.file.DefaultFileOperations.exec(DefaultFileOperations.java:165)
	at org.gradle.api.internal.project.AbstractProject.exec(AbstractProject.java:803)
	at org.gradle.api.internal.project.AbstractProject.exec(AbstractProject.java:799)
	at org.gradle.api.Project$exec$1.call(Unknown Source)
	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125)
	at com.avast.gradle.dockercompose.tasks.ComposeUp$1.run(ComposeUp.groovy:78)
	at java.lang.Thread.run(Thread.java:748)

Any ideas?

Feature Request: improve health check

It looks like docker-compose supports these health statuses:

  • health: starting
  • healthy
  • unhealthy

I configured a healthcheck in compose.yml:

healthcheck:
  test: ["CMD", "check-health.sh"]
  interval: 10s
  timeout: 5s
  retries: 3

If the container status changes from "health: starting" to "unhealthy", it would be great to fail the composeUp task (including shutting down the stack via composeDown). Otherwise the composeUp Gradle task hangs forever and the compose stack stays alive.
Currently, running this scenario (with unhealthy containers) on a CI server again and again may eventually kill the CI server's Docker host.

Would be great to have this feature. What do you think?

Minor error in README

In the README, the example of getting information on a specific container is not entirely correct: def webInfo = dockerCompose.servicesInfos.web.'web_1'
One would first need to access the containerInfos map and from that get the key web_1: def webInfo = dockerCompose.servicesInfos.web.containerInfos.'web_1'
Another solution, which keeps the shorthand format, would be to use Groovy's propertyMissing method like so:

def propertyMissing(String name) {
    containerInfos[name]
}

ComposeUp failing on a command that works when run manually (on MacOS)

Executing task ':composeUp' (up-to-date check took 0.0 secs) due to:
  Task has not declared any outputs.
Starting process 'command 'docker-compose''. Working directory: /Users/src/server Command: docker-compose -p default build
:composeUp FAILED
$ docker-compose -p default build
mysql uses an image, skipping

With exit code 0

Java 7 Support

Is there any reason you don't support Java 7?
I compiled the jar with 1.7 and it seemed to work fine.

ServiceInfos expect container name with specific format.

If I add container_name to my docker-compose.yml file, then I am not able to call dockerCompose.servicesInfos.<service_name>.'<container_name>'. Looking at the code, the reason is due to this. When container_name is set, the pattern does not match. Is there a need for the pattern check? Maybe I am missing something; is there anything wrong with just using the value of inspection.Name, excluding the /?

NullPointerException from exposeAsEnvironment for container with custom name

If a container has a custom name specified with the container_name option, exposeAsEnvironment() throws an exception with the following stack trace:

java.lang.NullPointerException: Cannot invoke method endsWith() on null object
        at java_lang_String$endsWith$1.call(Unknown Source)
        at com.avast.gradle.dockercompose.ComposeExtension$_exposeAsEnvironment_closure3$_closure9.doCall(ComposeExtension.groovy:68)
        at com.avast.gradle.dockercompose.ComposeExtension$_exposeAsEnvironment_closure3.doCall(ComposeExtension.groovy:67)
        at com.avast.gradle.dockercompose.ComposeExtension.exposeAsEnvironment(ComposeExtension.groovy:66)
        at com.avast.gradle.dockercompose.ComposeExtension$exposeAsEnvironment$3.call(Unknown Source)
        ...

The issue is that the regex at line 124 of ComposeUp will not necessarily match a custom container name. This leads to a null value of instanceName, which eventually causes the exception.

I will submit a pull request fixing this issue momentarily.

Health check jacoco agent in tcpserver mode

I wish to use jacocoagent with Docker as a tcpserver. The problem is that when I expose a port for the agent, the Gradle docker-compose plugin fails to check health for it. Is there any way to skip the health check for a specific port? The agent server itself works fine; I am able to run the jacoco:dump Ant task.

Option to remove local images only

Wondering if it would be possible to add an option to remove only locally created images using something like

docker-compose down --rmi local

I could create a PR if there are no objections?

Define services to be started

Thanks for the great plugin.

I just wonder if there is a chance to get a feature that allows selecting which services from a docker-compose.yml are started (e.g. web and db vs. web only).

Currently, I would need to hack around it by specifying a collection of compose.yml files. It feels weird somehow.
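A sketch of what such a feature could look like (the `startedServices` property name is an assumption for the version discussed here):

```groovy
dockerCompose {
    // Assumed option: only these services (and whatever they
    // depend on) are started by composeUp.
    startedServices = ['web', 'db']
}
```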
