moby-ryuk's Issues

Support ARM64

I run unit test suites on both x64 and arm64 and use testcontainers-node.

Unfortunately I have to disable the ryuk container on arm64, but I'd like to enable it!

Would it be possible to add an arm64 build of the container?

Ryuk Docker Extension

Docker Extensions let you use third-party tools within Docker Desktop to extend its functionality, allowing developers to seamlessly connect their favorite development tools to their application development and deployment workflows.

It would be great to have Ryuk UI and use it as a Docker Extension. What do you think?

How to log ryuk logs

Hello
I am using Testcontainers for Java (Kotlin) and have enabled ryuk's verbose output in testcontainers.properties. It gives me information about pulling and starting the container, but it writes nothing after the tests execute. Is there any way to log the result of the ryuk container?
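
One option, independent of the Java library, is to follow the ryuk container's logs directly (docker logs -f <ryuk-container-name> does the same on the command line). Note that the ryuk container goes away shortly after the test JVM disconnects, so the final "Removed ..." summary is only visible if you follow the logs while it is still running. A minimal Go sketch using the Docker SDK; the container name is a placeholder (Testcontainers names it testcontainers-ryuk-<session-id>):

package main

import (
	"context"
	"log"
	"os"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
	"github.com/docker/docker/pkg/stdcopy"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	// Placeholder name: substitute the actual testcontainers-ryuk-<session-id>.
	reader, err := cli.ContainerLogs(context.Background(), "testcontainers-ryuk-<session-id>",
		types.ContainerLogsOptions{ShowStdout: true, ShowStderr: true, Follow: true})
	if err != nil {
		log.Fatal(err)
	}
	defer reader.Close()
	// Demultiplex the stream (non-TTY containers interleave stdout/stderr frames).
	if _, err := stdcopy.StdCopy(os.Stdout, os.Stderr, reader); err != nil {
		log.Fatal(err)
	}
}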

Allow to register shutdown hooks for containers terminated by Ryuk

Sometimes containers need to execute some shutdown logic. In code I can override a lifecycle method, such as containerIsStopping in Testcontainers-java, and execute custom shutdown logic there.

But if my containers are garbage collected by Ryuk, it's currently impossible (or very hard) to provide shutdown logic.

I'd like to be able to register a shutdown hook for Ryuk. It could be as simple as enabling the behavior and having Ryuk check for a particular file path and run that file, if it exists, before killing the container.

This would allow me to:

  • enable shutdown hooks in Ryuk
  • copy a script with the shutdown logic into the container
  • override containerIsStopping to call the same file in the container

My containers would then clean up their resources whether they are stopped by code or by Ryuk; a sketch of the Ryuk side follows below.
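
A minimal sketch of what the Ryuk side of such a hook could look like, using the Docker Go SDK. The hook path /ryuk-shutdown.sh and the behavior itself are assumptions for illustration, not an existing Ryuk feature:

package main

import (
	"context"
	"log"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

// runShutdownHook is a hypothetical helper: before removing a container,
// Ryuk would exec a well-known script inside it, if present.
func runShutdownHook(ctx context.Context, cli *client.Client, containerID string) error {
	exec, err := cli.ContainerExecCreate(ctx, containerID, types.ExecConfig{
		// The "sh -c" wrapper skips silently when the hook file does not exist.
		Cmd: []string{"sh", "-c", "[ -f /ryuk-shutdown.sh ] && sh /ryuk-shutdown.sh || true"},
	})
	if err != nil {
		return err
	}
	return cli.ContainerExecStart(ctx, exec.ID, types.ExecStartCheck{})
}

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	// "my-container" is a placeholder ID.
	if err := runShutdownHook(context.Background(), cli, "my-container"); err != nil {
		log.Printf("shutdown hook failed: %v", err)
	}
}

The sh -c wrapper makes the hook best-effort, so containers that don't carry the script are unaffected.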

Unable to build Ryuk with s390x (buildkit/qemu)

The multi-arch buildkit build is currently failing for the s390x platform, with a segmentation fault. This is despite no changes since #21, which was working in December.

This is reproducible with GitHub Actions and with Docker for Mac 3.3.0 (older versions not tried).

Many go subcommands fail (e.g. build, vet, fmt), and mod download fails when using newer versions of Go.

So far all of the following avenues have proved fruitless:

  • Pinning buildkit/binfmt to various versions
  • Changing the golang image tag used for the build (up to 1.16.3)
  • Changing the golang base image (from buster to alpine)

Quick analysis of the core dumps doesn't yield anything obvious to me.

Cannot connect to docker.sock

00:11:38.397 [main] DEBUG o.t.c.o.WaitingConsumer - STDERR: panic: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
00:11:38.397 [main] DEBUG o.t.c.o.WaitingConsumer - STDERR: 2023/09/16 16:11:38 Pinging Docker...
After upgrading Docker Desktop to version 4.22.1, I found that I cannot start my program. I have ticked the box (screenshot omitted) and checked the socket file on my machine (screenshot omitted), but I still cannot run my MySQL program.

GKE / containerd - Pods unable to connect to Ryuk

Hi,

I'm working with GitLab Runners on GKE, running tests in a Gradle project that uses Testcontainers; the Ryuk container always fails to connect, no matter what I try.

Here's the log showing the error:

Note: Recompile with -Xlint:deprecation for details.

> Task :processTestResources
> Task :testClasses
> Task :test

xxxxxxxxxxxTest > initializationError FAILED
    java.lang.IllegalStateException at RyukResourceReaper.java:132
09:48:00.342 [testcontainers-ryuk] WARN org.testcontainers.utility.RyukResourceReaper - Can not connect to Ryuk at localhost:49153
java.net.ConnectException: Connection refused
        at java.base/sun.nio.ch.Net.pollConnect(Native Method)
        at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672)
        at java.base/sun.nio.ch.NioSocketImpl.timedFinishConnect(NioSocketImpl.java:542)
        at java.base/sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:597)
        at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:327)
        at java.base/java.net.Socket.connect(Socket.java:633)
        at org.testcontainers.utility.RyukResourceReaper.lambda$null$0(RyukResourceReaper.java:92)
        at org.rnorth.ducttape.ratelimits.RateLimiter.doWhenReady(RateLimiter.java:27)
        at org.testcontainers.utility.RyukResourceReaper.lambda$maybeStart$1(RyukResourceReaper.java:88)
        at java.base/java.lang.Thread.run(Thread.java:833)

Since GKE pushes the use of containerd instead of Docker regardless of which Kubernetes version we use, I tried adding the following lines to the GitLab Runner manifest:

- name: "TESTCONTAINERS_HOST_OVERRIDE"
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP

But the error message remains...

For now, I have no choice but to keep using the GitLab Runners on the older cluster, but the idea is to get rid of that one and move to the newest one.

Any help would be appreciated, thanks in advance.

volumes are not deleted on docker 24.0.1

Context

Seems related to docker/cli#4028

How to reproduce with the CLI

  1. Create a volume
    $ docker volume create foo --label foo=bar
  2. Try to prune it with a simple filter:
    $ docker volume prune --filter "label=foo=bar"
    WARNING! This will remove anonymous local volumes not used by at least one container.
    Are you sure you want to continue? [y/N] y
    Total reclaimed space: 0B
    $ docker volume ls --filter "label=foo=bar"
    DRIVER    VOLUME NAME
    local     foo

The volume is still there, not removed.

Workaround with the CLI

$ docker volume prune --filter "label=foo=bar" --filter "all=1"
WARNING! This will remove anonymous local volumes not used by at least one container.
Are you sure you want to continue? [y/N] y
Deleted Volumes:
foo

Total reclaimed space: 0B
$ docker volume ls --filter "label=foo=bar"
DRIVER    VOLUME NAME

What does this have to do with Ryuk?

Ryuk behaves the same way as the Docker CLI: it emits a prune request without all=1, which has no effect on the named volumes.
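
For reference, this is the API-level equivalent of the CLI workaround, sketched with the Docker Go SDK (the same client library Ryuk uses); the label value is the one from the repro above:

package main

import (
	"context"
	"log"

	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}

	args := filters.NewArgs()
	args.Add("label", "foo=bar")
	// Without this, daemons on API >= v1.42 only prune *anonymous* volumes.
	args.Add("all", "true")

	report, err := cli.VolumesPrune(context.Background(), args)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("deleted volumes: %v", report.VolumesDeleted)
}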

docker image should be periodically refreshed

At our organization we have adopted a policy of not using stale Docker images. Ryuk seems to be based on alpine:latest, but it hasn't actually been rebuilt for a long time (the latest version on quay.io was created a year ago), so it looks like it's stuck on Alpine 3.7.0.
Would it be possible to rebuild the ryuk image, or even set up a schedule to rebuild it periodically (or trigger a rebuild on Alpine updates), so that it doesn't depend on a stale Alpine image?

ryuk also deletes non-testcontainers volumes and images on API version < 1.30

ryuk logs:

2021/11/16 16:18:55 Pinging Docker...
2021/11/16 16:18:55 Docker daemon is available!
2021/11/16 16:18:55 Starting on port 8080...
2021/11/16 16:18:55 Started!
2021/11/16 16:18:55 New client connected: 172.17.0.1:53188
2021/11/16 16:18:55 Received the first connection
2021/11/16 16:18:55 Adding {"label":{"org.testcontainers.sessionId=ab7cfae4-20c9-49bf-aa9f-4fc785e64b3f":true,"org.testcontainers=true":true}}
2021/11/16 16:19:11 EOF
2021/11/16 16:19:11 Client disconnected: 172.17.0.1:53188
2021/11/16 16:19:21 Timed out waiting for re-connection
2021/11/16 16:19:21 Deleting {"label":{"org.testcontainers.sessionId=ab7cfae4-20c9-49bf-aa9f-4fc785e64b3f":true,"org.testcontainers=true":true}}
2021/11/16 16:19:37 Removed 0 container(s), 0 network(s), 58 volume(s) 977 image(s)

--> This deleted 58 volume(s) and 977 image(s). None of them were created by testcontainers.

I use the latest testcontainers, 1.16.2. Docker is on quite an old version, 1.13.1.

[ujpr01@tools-test-01 ~]$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.7 (Maipo)

[ujpr01@tools-test-01 ~]$ docker version
Client:
 Version:         1.13.1
 API version:     1.26
 Package version: docker-1.13.1-162.git64e9980.el7_8.x86_64
 Go version:      go1.10.3
 Git commit:      64e9980/1.13.1
 Built:           Mon Jun 22 03:20:20 2020
 OS/Arch:         linux/amd64

Server:
 Version:         1.13.1
 API version:     1.26 (minimum version 1.12)
 Package version: docker-1.13.1-162.git64e9980.el7_8.x86_64
 Go version:      go1.10.3
 Git commit:      64e9980/1.13.1
 Built:           Mon Jun 22 03:20:20 2020
 OS/Arch:         linux/amd64
 Experimental:    false

On Windows (Docker Desktop 3.6.0) I don't observe this problem.
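
A hedged sketch of how a reaper could defend against this class of daemon: check the negotiated API version and skip the label-filtered volume/image prunes entirely when the daemon is too old to honor label filters, rather than prune everything. The 1.30 cut-off below is an assumption; the exact minimum version per endpoint would need to be checked against the API docs:

package main

import (
	"context"
	"log"

	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/api/types/versions"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}

	args := filters.NewArgs()
	args.Add("label", "org.testcontainers=true")

	// Old daemons silently ignore label filters on some prune endpoints,
	// which turns a scoped prune into "prune everything". Refusing to prune
	// in that case is the safe behaviour.
	if versions.LessThan(cli.ClientVersion(), "1.30") {
		log.Printf("API version %s is too old for label-filtered prunes, skipping volume/image prune", cli.ClientVersion())
		return
	}
	if _, err := cli.VolumesPrune(context.Background(), args); err != nil {
		log.Fatal(err)
	}
}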

Support s390x architecture

@BbolroC has requested that zipkin support the s390x architecture, and we use testcontainers. #15 has notes about arm64 which are relevant here. I noticed that with the very latest qemu it might work, but I can't tell whether go mod download is slow or hung, as it never completes when I run it on s390x.

Would it be possible to make the image pruning optional?

For some time now, Ryuk has also been cleaning up images (#13, #25).
It does so with an ImageSearch filter using labels (see Testcontainers' RyukResourceReaper class or Ryuk's README).

With older Docker versions where that filter doesn't support labels (e.g. Docker 1.13), this results in deleting ALL currently unused images; see this discussion with @kiview on stackoverflow.

In my CI/CD workflow (which runs on a build server with Docker 1.13, with no possibility of upgrading to a newer Docker version) I was forced to disable Ryuk altogether. The pipeline builds a Docker image that Testcontainers runs to perform tests, and only when the image passes all tests is it promoted to the next pipeline steps; after a successful test run the image was no longer referenced and was therefore removed by Ryuk.

So here's my question (I'm not familiar with Go and don't know what is easily possible...):

Would it be possible to make the image pruning optional in Ryuk? Or should there rather be a warning and/or documentation in Testcontainers itself saying that Ryuk should be disabled for older Docker versions?
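
Not being a maintainer either, here is a sketch of what an opt-out could look like; RYUK_SKIP_IMAGE_PRUNE is a made-up name for illustration, not an existing Ryuk setting:

package main

import (
	"context"
	"log"
	"os"

	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}

	// Hypothetical switch: let users opt out of image pruning entirely,
	// e.g. on daemons whose prune filters do not support labels.
	if os.Getenv("RYUK_SKIP_IMAGE_PRUNE") == "true" {
		log.Println("image pruning disabled, skipping")
		return
	}

	args := filters.NewArgs()
	args.Add("label", "org.testcontainers=true")
	report, err := cli.ImagesPrune(context.Background(), args)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("deleted %d image layer(s)", len(report.ImagesDeleted))
}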

How to specify the delay?

The readme says:
"project helps you to remove ... ... after specified delay"

Could you please provide an example of how to specify the mentioned delay?
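
For what it's worth: in older Ryuk releases the delays were hard-coded constants (the reconnection timeout was 10 seconds), while recent releases read them from environment variables such as RYUK_CONNECTION_TIMEOUT and RYUK_RECONNECTION_TIMEOUT; check the README of the version you actually run. A sketch of how such a value is typically parsed, assuming the env-var form:

package main

import (
	"log"
	"os"
	"time"
)

func main() {
	timeout := 10 * time.Second // the historical default
	// RYUK_RECONNECTION_TIMEOUT matches recent Ryuk releases; older versions
	// hard-coded the value, so verify against your version's README.
	if v := os.Getenv("RYUK_RECONNECTION_TIMEOUT"); v != "" {
		d, err := time.ParseDuration(v) // accepts Go durations like "10s" or "1m30s"
		if err != nil {
			log.Fatalf("invalid RYUK_RECONNECTION_TIMEOUT %q: %v", v, err)
		}
		timeout = d
	}
	log.Printf("will prune %s after the last client disconnects", timeout)
}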

Merge changes from PSanetra/moby-ryuk fork

As testcontainers/testcontainers-dotnet is using the PSanetra/moby-ryuk fork, would you please consider merging those changes back so that it can switch back to testcontainers/moby-ryuk?

Not allowed to prune networks in parallel

When trying to remove networks, these messages might appear in the logs:
level=error msg="Handler for POST /v1.29/networks/prune returned error: a prune operation is already running"
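
Since the daemon allows only one prune operation per resource type at a time, a client-side retry with backoff (or serializing the prune calls) works around this. A minimal retry sketch that matches on the error text above; matching by string is a simplification:

package main

import (
	"context"
	"log"
	"strings"
	"time"

	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	args := filters.NewArgs()
	args.Add("label", "org.testcontainers=true")

	// The daemon permits one prune per resource type at a time, so retry
	// with a short backoff when we hit the "already running" error.
	for attempt := 1; attempt <= 5; attempt++ {
		_, err = cli.NetworksPrune(context.Background(), args)
		if err == nil || !strings.Contains(err.Error(), "a prune operation is already running") {
			break
		}
		time.Sleep(time.Duration(attempt) * time.Second)
	}
	if err != nil {
		log.Fatal(err)
	}
}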

Support docker image pruning as well

The current Ryuk can prune containers, networks and volumes, but it cannot prune Docker images.

Since testcontainers-java can now create images and run them on the fly, Ryuk should be able to prune them as well.

They are currently pruned using a shutdown hook in Java, but I don't think that is absolutely reliable; that is probably the reason Ryuk was created in the first place.

So, I hope Ryuk can support pruning images as well.
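
A sketch of what image pruning could look like through the Docker Go SDK, scoped by the same session label Ryuk already applies to containers. The dangling=false filter, which extends pruning beyond untagged layers, is an assumption about the desired behavior:

package main

import (
	"context"
	"log"

	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}

	args := filters.NewArgs()
	args.Add("label", "org.testcontainers=true")
	// dangling=false asks the daemon to remove all unused matching images,
	// not only untagged layers.
	args.Add("dangling", "false")

	report, err := cli.ImagesPrune(context.Background(), args)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("reclaimed %d bytes across %d image(s)", report.SpaceReclaimed, len(report.ImagesDeleted))
}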

Limit Memory Usage

We run testcontainers/ryuk in a parallel Maven multi-module environment. Running tests in parallel often causes memory issues, and the build crashes.
I noticed that the ryuk containers are quite hungry for memory. Is it possible to limit that? (screenshot omitted)
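
As far as I can tell Ryuk itself exposes no memory setting, and when a Testcontainers library starts Ryuk you don't control the create call directly. At the Docker API level, though, the cap is a one-field HostConfig change, shown here as a sketch (the image tag and the 64 MiB limit are placeholders; the six-argument ContainerCreate is the modern SDK signature):

package main

import (
	"context"
	"log"

	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}

	resp, err := cli.ContainerCreate(context.Background(),
		&container.Config{Image: "testcontainers/ryuk:0.5.1"}, // placeholder tag
		&container.HostConfig{
			Resources: container.Resources{
				Memory: 64 * 1024 * 1024, // hard cap at 64 MiB (value is in bytes)
			},
		},
		nil, nil, "")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("created %s with a 64 MiB memory limit", resp.ID)
}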

Publish to GitHub Docker Registry

In zipkin, we'd like to reduce the risk of build outages for anyone running our build (e.g. ourselves, as well as the many people who fork zipkin). An easy way is to stop using the Docker Hub registry during the build process. We've eliminated nearly all image pulls we can find, but I suspect ryuk is still implicitly pulling from Docker Hub. Can you also publish this to the GitHub Container Registry?

Note: so far we've not found a mirror for alpine that supports arm64, so while publishing to GitHub should be fine, you might have trouble with FROM. That said, consumers can still benefit even if this repo can't depend purely on GHCR.

networks are not automatically removed

I am running Docker Compose with Testcontainers, but I do not explicitly stop the DockerComposeContainer at the end of my test (because the container is statically reused by multiple tests in the test suite). I was expecting moby-ryuk to delete the containers and the networks at the end. It does delete the containers correctly, but it does not seem to delete the networks.

~/devel/docker-fonse$ docker ps
CONTAINER ID        IMAGE                               COMMAND                  CREATED             STATUS                  PORTS                          NAMES
8c7a7503accb        confluentinc/cp-zookeeper:5.1.0     "/etc/confluent/dock…"   1 second ago        Up Less than a second   2181/tcp, 2888/tcp, 3888/tcp   z2wsbcee3xqm_zookeeper_1
0a1b8c2ba32f        docker/compose:1.8.0                "/usr/bin/docker-com…"   3 seconds ago       Up 2 seconds                                           quizzical_jennings
686039115d69        quay.io/testcontainers/ryuk:0.2.2   "/app"                   8 seconds ago       Up 7 seconds            0.0.0.0:32827->8080/tcp        testcontainers-ryuk-fcc1e2e8-88a6-448b-98f3-510af7731e0a
~/devel/docker-fonse$ docker logs testcontainers-ryuk-fcc1e2e8-88a6-448b-98f3-510af7731e0a -f
2019/01/08 18:42:54 Starting on port 8080...
2019/01/08 18:42:54 Connected
2019/01/08 18:42:54 Adding {"label":{"org.testcontainers.sessionId=fcc1e2e8-88a6-448b-98f3-510af7731e0a":true,"org.testcontainers=true":true}}
2019/01/08 18:42:54 Adding {"label":{"com.docker.compose.project=z2wsbcee3xqm":true}}
2019/01/08 18:43:26 EOF
2019/01/08 18:43:26 Disconnected
2019/01/08 18:43:36 Timed out waiting for connection
2019/01/08 18:43:36 Deleting {"label":{"org.testcontainers.sessionId=fcc1e2e8-88a6-448b-98f3-510af7731e0a":true,"org.testcontainers=true":true}}
2019/01/08 18:43:36 Deleting {"label":{"com.docker.compose.project=z2wsbcee3xqm":true}}
2019/01/08 18:43:38 Removed 6 container(s), 0 network(s), 0 volume(s)
~/devel/docker-fonse$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
~/devel/docker-fonse$ docker network list
NETWORK ID          NAME                  DRIVER              SCOPE
12cb5757d5e4        bridge                bridge              local
c7829c02e770        host                  host                local
7b2bb170377d        none                  null                local
3b836122fd7c        z2wsbcee3xqm_patate   bridge              local
~/devel/docker-fonse$ 

Is this the expected behavior? I was under the impression that ryuk was meant exactly for this purpose...

Support Podman

Is this a BUG REPORT or FEATURE REQUEST?

/kind FEATURE REQUEST

Description

ryuk doesn't work with podman

Context

podman info
host:
  arch: amd64
  buildahVersion: 1.19.4
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.0.26-1.fc33.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.26, commit: 777074ecdb5e883b9bec233f3630c5e7fa37d521'
  cpus: 8
  distribution:
    distribution: fedora
    version: "33"
  eventLogger: journald
  hostname: fedora
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.10.19-200.fc33.x86_64
  linkmode: dynamic
  memFree: 893497344
  memTotal: 12438728704
  ociRuntime:
    name: crun
    package: crun-0.18-1.fc33.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 0.18
      commit: 808420efe3dc2b44d6db9f1a3fac8361dde42a95
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    selinuxEnabled: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.8-1.fc33.x86_64
    version: |-
      slirp4netns version 1.1.8
      commit: d361001f495417b880f20329121e3aa431a8f90f
      libslirp: 4.3.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.0
  swapFree: 3499356160
  swapTotal: 4294963200
  uptime: 26h 5m 54.09s (Approximately 1.08 days)
registries:
  search:
  - registry.fedoraproject.org
  - quay.io
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
store:
  configFile: /home/msa/.config/containers/storage.conf
  containerStore:
    number: 2
    paused: 0
    running: 0
    stopped: 2
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.4.0-1.fc33.x86_64
      Version: |-
        fusermount3 version: 3.9.3
        fuse-overlayfs: version 1.4
        FUSE library version 3.9.3
        using FUSE kernel interface version 7.31
  graphRoot: /home/msa/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 35
  runRoot: /run/user/1000/containers
  volumePath: /home/msa/.local/share/containers/storage/volumes
version:
  APIVersion: 3.0.0
  Built: 1613753777
  BuiltTime: Fri Feb 19 13:56:17 2021
  GitCommit: ""
  GoVersion: go1.15.8
  OsArch: linux/amd64
  Version: 3.0.1

Evidence

podman run -it --rm -v $PWD:$PWD -w $PWD -v /run/user/1000/podman/podman.sock:/run/user/1000/podman/podman.sock docker.io/alpine:3.5 echo ok 
ok
podman run -it --rm -v $PWD:$PWD -w $PWD -v /run/user/1000/podman/podman.sock:/run/user/1000/podman/podman.sock docker.io/testcontainersofficial/ryuk 
2021/03/07 01:29:28 Pinging Docker...
panic: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

goroutine 1 [running]:
main.main()
	/go/src/github.com/testcontainers/moby-ryuk/main.go:36 +0x457
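
The panic is consistent with Ryuk's Docker client looking at the default unix:///var/run/docker.sock, while the podman run above mounts the socket at a different path inside the container. Mounting podman's socket at /var/run/docker.sock, or setting DOCKER_HOST to the mounted path, should at least get past the ping, since the Docker Go SDK honors DOCKER_HOST when the client is built from the environment (an assumption about how Ryuk constructs its client). A sketch of that construction:

package main

import (
	"context"
	"log"

	"github.com/docker/docker/client"
)

// client.FromEnv honours DOCKER_HOST, so e.g.
// DOCKER_HOST=unix:///run/user/1000/podman/podman.sock redirects the ping
// that panics above. Whether the rest of Ryuk works against podman's
// Docker-compatibility API is a separate question.
func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	if _, err := cli.Ping(context.Background()); err != nil {
		log.Fatalf("cannot reach the daemon: %v", err)
	}
	log.Println("daemon reachable")
}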

Ryuk and associated containers/networks can leak if client does not respond

We run tests in an environment where Java test clients on various OS/JDK levels use testcontainers and connect to a pool of docker-host machines whenever they need containers.

In some cases (<5% of the time) the ryuk container and its associated resources do not get cleaned up:

$ docker ps
CONTAINER ID        IMAGE                               COMMAND                  CREATED             STATUS              PORTS                                              NAMES
c7ec0ab0bd19        alpine/socat:latest                 "/bin/sh -c 'socat T…"   5 days ago          Up 5 days           0.0.0.0:37663->2181/tcp, 0.0.0.0:37662->9093/tcp   testcontainers-socat-bSS4X5wg
89277a928e81        alpine/socat:latest                 "/bin/sh -c 'socat T…"   5 days ago          Up 5 days           0.0.0.0:37658->2181/tcp, 0.0.0.0:37657->9093/tcp   testcontainers-socat-8gV5vMsL
5bba87ed9019        quay.io/testcontainers/ryuk:0.2.3   "/app"                   5 days ago          Up 5 days           0.0.0.0:37637->8080/tcp                            testcontainers-ryuk-4bfb265d-8e31-414b-b56e-3bde11d4f118
33cdc8a161f6        alpine/socat:latest                 "/bin/sh -c 'socat T…"   2 weeks ago         Up 2 weeks          0.0.0.0:36235->2181/tcp, 0.0.0.0:36234->9093/tcp   testcontainers-socat-Zj1Pdz7g
1c4c293b6a75        alpine/socat:latest                 "/bin/sh -c 'socat T…"   2 weeks ago         Up 2 weeks          0.0.0.0:36230->2181/tcp, 0.0.0.0:36229->9093/tcp   testcontainers-socat-8oPGHyAE
d51160d1a4a1        quay.io/testcontainers/ryuk:0.2.3   "/app"                   2 weeks ago         Up 2 weeks          0.0.0.0:36225->8080/tcp                            testcontainers-ryuk-da666da0-de77-409c-a594-0f5a0f49a2c2
4ccdb2407c12        alpine/socat:latest                 "/bin/sh -c 'socat T…"   2 weeks ago         Up 2 weeks          0.0.0.0:36172->2181/tcp, 0.0.0.0:36171->9093/tcp   testcontainers-socat-mqLHvDuq
457a5b5a02ec        alpine/socat:latest                 "/bin/sh -c 'socat T…"   2 weeks ago         Up 2 weeks          0.0.0.0:36167->2181/tcp, 0.0.0.0:36166->9093/tcp   testcontainers-socat-M6HuSktL
8b2f0966f51b        quay.io/testcontainers/ryuk:0.2.3   "/app"                   2 weeks ago         Up 2 weeks          0.0.0.0:36162->8080/tcp                            testcontainers-ryuk-f6edb35f-d765-4851-97ab-84c866980f79
3677a5c868ac        alpine/socat:latest                 "/bin/sh -c 'socat T…"   2 weeks ago         Up 2 weeks          0.0.0.0:35863->2181/tcp, 0.0.0.0:35862->9093/tcp   testcontainers-socat-bgNGAdry
c422c5c9065d        alpine/socat:latest                 "/bin/sh -c 'socat T…"   2 weeks ago         Up 2 weeks          0.0.0.0:35858->2181/tcp, 0.0.0.0:35857->9093/tcp   testcontainers-socat-SX4Wutx7
19cea6353e39        quay.io/testcontainers/ryuk:0.2.3   "/app"                   2 weeks ago         Up 2 weeks          0.0.0.0:35853->8080/tcp                            testcontainers-ryuk-8bb47a42-2efb-4f33-91dd-94a5be4856f2
4b7845b70f41        alpine/socat:latest                 "/bin/sh -c 'socat T…"   2 weeks ago         Up 2 weeks          0.0.0.0:35319->2181/tcp, 0.0.0.0:35318->9093/tcp   testcontainers-socat-C8gj32mm
7e195ade0fae        alpine/socat:latest                 "/bin/sh -c 'socat T…"   2 weeks ago         Up 2 weeks          0.0.0.0:35314->2181/tcp, 0.0.0.0:35313->9093/tcp   testcontainers-socat-YFFpffSv
10db2025ff30        quay.io/testcontainers/ryuk:0.2.3   "/app"                   2 weeks ago         Up 2 weeks          0.0.0.0:35309->8080/tcp                            testcontainers-ryuk-8ff9a71c-f6d9-452a-a314-72ea66c8b302
c35a061c08b2        quay.io/testcontainers/ryuk:0.2.3   "/app"                   2 weeks ago         Up 2 weeks          0.0.0.0:35287->8080/tcp                            testcontainers-ryuk-1476b6b9-0dc3-4b67-9dc6-5b1aae43bb25
51045f686d7f        alpine/socat:latest                 "/bin/sh -c 'socat T…"   2 weeks ago         Up 2 weeks          0.0.0.0:35245->2181/tcp, 0.0.0.0:35244->9093/tcp   testcontainers-socat-iB7nP1E5
dd9f0676d322        alpine/socat:latest                 "/bin/sh -c 'socat T…"   2 weeks ago         Up 2 weeks          0.0.0.0:35240->2181/tcp, 0.0.0.0:35239->9093/tcp   testcontainers-socat-DEvvAs32
b5eb6b5fde2b        quay.io/testcontainers/ryuk:0.2.3   "/app"                   2 weeks ago         Up 2 weeks          0.0.0.0:35235->8080/tcp                            testcontainers-ryuk-294048ad-dfe0-4625-b2bf-10de5b906093
4c8977c572b0        alpine/socat:latest                 "/bin/sh -c 'socat T…"   3 weeks ago         Up 3 weeks          0.0.0.0:34740->2181/tcp, 0.0.0.0:34739->9093/tcp   testcontainers-socat-NVaK6xDn
2b607dd90d30        quay.io/testcontainers/ryuk:0.2.3   "/app"                   3 weeks ago         Up 3 weeks          0.0.0.0:34731->8080/tcp                            testcontainers-ryuk-4f7b43e6-437e-49cc-b303-ba3d91f8b191
5bf93a64f64a        alpine/socat:latest                 "/bin/sh -c 'socat T…"   3 weeks ago         Up 3 weeks          0.0.0.0:34634->2181/tcp, 0.0.0.0:34633->9093/tcp   testcontainers-socat-DXhYF1hG
f49ae979ff34        alpine/socat:latest                 "/bin/sh -c 'socat T…"   3 weeks ago         Up 3 weeks          0.0.0.0:34625->2181/tcp, 0.0.0.0:34624->9093/tcp   testcontainers-socat-Yfy1SSaz
37c2c873146b        quay.io/testcontainers/ryuk:0.2.3   "/app"                   3 weeks ago         Up 3 weeks          0.0.0.0:34620->8080/tcp                            testcontainers-ryuk-2efbf29f-327d-4b4b-8f92-1f496fe71aca
31724dd761f3        alpine/socat:latest                 "/bin/sh -c 'socat T…"   3 weeks ago         Up 3 weeks          0.0.0.0:34103->2181/tcp, 0.0.0.0:34102->9093/tcp   testcontainers-socat-npmRFPyW
0a085bb26a8b        alpine/socat:latest                 "/bin/sh -c 'socat T…"   3 weeks ago         Up 3 weeks          0.0.0.0:34098->2181/tcp, 0.0.0.0:34097->9093/tcp   testcontainers-socat-ei1xCFrC
720b2f43d41c        quay.io/testcontainers/ryuk:0.2.3   "/app"                   3 weeks ago         Up 3 weeks          0.0.0.0:34093->8080/tcp                            testcontainers-ryuk-a7a1cedd-1e5c-4391-b98e-c7efce05090e

I know for certain that the client JVMs are gone, because the test client machines are VMs with a maximum TTL as well. If a test VM runs for too long, the machine is forcibly removed.

Here are the complete logs for a ryuk container that has been up for 3 weeks:

2019/08/06 15:25:49 Starting on port 8080...
2019/08/06 15:25:49 Connected
2019/08/06 15:25:49 Adding {"label":{"org.testcontainers.sessionId=a7a1cedd-1e5c-4391-b98e-c7efce05090e":true,"org.testcontainers=true":true}}

One problem is that our test client starting the alpine/socat:latest image was not explicitly stopping it, relying on Ryuk to clean it up 100% of the time. We will correct this, but it would still be good to have a safety net on the Ryuk containers.

Proposed solution:

I think the Ryuk containers should have a configurable max TTL for situations where the client ceases to respond. By default the TTL could be 0 (infinite) so existing behavior is not impacted.
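
A sketch of that proposal: a hypothetical RYUK_MAX_TTL environment variable (the name is made up) that forces a prune-and-exit after a hard deadline, with zero or unset keeping today's behavior:

package main

import (
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical safety net: even if no client ever reconnects, prune and
	// exit once the TTL elapses. Unset or zero keeps the current behaviour.
	if v := os.Getenv("RYUK_MAX_TTL"); v != "" {
		ttl, err := time.ParseDuration(v) // e.g. "24h"
		if err != nil {
			log.Fatalf("invalid RYUK_MAX_TTL %q: %v", v, err)
		}
		if ttl > 0 {
			time.AfterFunc(ttl, func() {
				log.Printf("max TTL of %s reached with no client, pruning and exiting", ttl)
				// ...run the normal prune here, then:
				os.Exit(0)
			})
		}
	}
	select {} // stand-in for Ryuk's normal serve loop
}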

Ryuk does not prune volumes (Moby >= v23.0.0, DD >= 4.19.0)

The latest release of Moby (v23.0.0), which is included in Docker Desktop 4.19.0, introduces a breaking API change. It appears that the change is not "backwards compatible":

API: Only anonymous volumes now pruned by default on API version >= v1.42. Pass the filter all=true to prune named volumes in addition to anonymous. moby/moby#44259

To prune not just anonymous volumes, the filter all=true is required: v1.42 vs. v1.41. Maybe we can use the NewVersionError method to determine whether we need to include the additional filter or not. The following change fixes the issue for the latest version:

diff --git a/main.go b/main.go
index d59a3ee..fad8a74 100644
--- a/main.go
+++ b/main.go
@@ -290,6 +290,7 @@ func prune(cli *client.Client, deathNote *sync.Map) (deletedContainers int, dele
 		})
 
 		_ = try.Do(func(attempt int) (bool, error) {
+			args.Add("all", "true")
 			volumesPruneReport, err := cli.VolumesPrune(context.Background(), args)
 			for _, volumeName := range volumesPruneReport.VolumesDeleted {
 				deletedVolumesMap[volumeName] = true

We need to create a copy of args, otherwise we include the filter in the other prune operations too. We should consider this for images as well.
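
A sketch of the version-gated variant with a separate filter set for the volume prune, so the extra key cannot leak into the container/network/image prunes. versions.GreaterThanOrEqualTo is used here instead of NewVersionError, which is a choice rather than a requirement:

package main

import (
	"context"
	"log"

	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/api/types/versions"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}

	args := filters.NewArgs()
	args.Add("label", "org.testcontainers=true")

	// Build a separate filter set for volumes so "all" does not leak into
	// the container/network/image prunes that reuse args.
	volumeArgs := filters.NewArgs()
	for _, label := range args.Get("label") {
		volumeArgs.Add("label", label)
	}
	if versions.GreaterThanOrEqualTo(cli.ClientVersion(), "1.42") {
		volumeArgs.Add("all", "true") // API >= v1.42 prunes only anonymous volumes by default
	}

	if _, err := cli.VolumesPrune(context.Background(), volumeArgs); err != nil {
		log.Fatal(err)
	}
}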

New Release

Hi, do you have an ETA for a new release that contains the latest changes?

testcontainers/ryuk:0.3.0 disappeared from quay.io

For unknown reasons, the latest stable version 0.3.0 seems to have disappeared from quay.io. I'm pretty sure it used to be there, because it's the default location baked into at least the Java library, and our builds have now started to fail because the image can no longer be pulled.

Note: v0.3.0 is available via Docker Hub, but certain versions of Testcontainers do not pull it from there.

Add a possibility to start Ryuk in a Network

Hi, is it possible to start the Ryuk container in a Docker network?

The current problem is that when I use GitHub Actions and start a container, it is forced into a Docker network generated by GitHub Actions, and the self-hosted runner in my company doesn't allow inter-network communication. But Testcontainers (at least the Java lib) starts Ryuk in the default network, so it can't be reached anymore. Even when I start the Docker image manually to build and test without a network, it can't reach some other containers. Only when I create a new network and run the containers in it can they communicate.

The old GHA workflow looked like this:

name: "Release Java Artifact"
on:
  workflow_dispatch:
jobs:
  build-push:
    name: Build and Push to Artifactory
    runs-on: self-hosted
    container:
      image: docker:dind # For example in our case a specialized image that contains everything to build
    steps:
      - name: Maven deploy
        run: |
          mvn -B install deploy:deploy --no-transfer-progress

My current solution is to create a network manually, run the build container in it, and pass the network name as an environment variable to the containers I start within the tests. But for Ryuk this is currently not possible? Do you have an idea how to solve this? I want to add the container to the death note 🗡️
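
For reference, at the Docker API level attaching a container to a user-defined network at creation time is a single EndpointsConfig entry. This sketch shows the call the Testcontainers libraries would need to expose for Ryuk; the network name and image tag are placeholders:

package main

import (
	"context"
	"log"

	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/api/types/network"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}

	// "ci-network" is a placeholder for the network created by the runner.
	resp, err := cli.ContainerCreate(context.Background(),
		&container.Config{Image: "testcontainers/ryuk:0.5.1"}, // placeholder tag
		nil,
		&network.NetworkingConfig{
			EndpointsConfig: map[string]*network.EndpointSettings{
				"ci-network": {},
			},
		},
		nil, "")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("created %s attached to ci-network", resp.ID)
}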
