

bin's People

Contributors

andrewedstrom, chendrix, clarafu, databus23, evashort, jdeppe-pivotal, jmelchio, joshzarrabi, mariash, mhuangpivotal, osis, pivotal-bin-ju, shyx0rmz, topherbullock, vito, vmwghbot, xtreme-sameer-vohra


bin's Issues

"Asset cli-artifacts not found" error while getting up ATC

Hello,

I was trying to spin up the ATC using the command:

concourse web ^
--basic-auth-username admin ^
--basic-auth-password admin ^
--session-signing-key session_signing_key ^
--tsa-host-key host_key ^
--tsa-authorized-keys authorized_worker_keys ^
--postgres-data-source postgres://atc_user:[email protected]/atc ^
--external-url http://10.122.111.111

This resulted in an "Asset cli-artifacts not found" error message.

Platform: Windows 2008 R2 x64 and Windows 10 x64
Concourse version: v2.5.1

In github "bin / ci / build-windows.bat" I can see that there should be another folder "cli-artifacts" with binaries. Is it the case? I can't find anything relevant in installation instructions.

Extra listening ports when running `concourse web`

When I start concourse web, configured to listen on localhost for the TSA and with 8443 as the TLS_BIND_PORT, I see a few extra ports alive. I understand 2222 is for the TSA and 8079 is the debug port, but what are the open listeners using ephemeral ports? They both appear to be HTTP servers, but I haven't been able to find documentation explaining what they do, or how to lock them down to specific bind addresses or custom ports.

tcp        0      0 127.0.0.1:2222          0.0.0.0:*               LISTEN      3470/concourse
tcp        0      0 127.0.0.1:8079          0.0.0.0:*               LISTEN      3470/concourse
tcp6       0      0 :::41965                :::*                    LISTEN      3470/concourse
tcp6       0      0 :::8080                 :::*                    LISTEN      3470/concourse
tcp6       0      0 :::8443                 :::*                    LISTEN      3470/concourse
tcp6       0      0 :::35581                :::*                    LISTEN      3470/concourse
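
Not an answer, but a way to dig further: a minimal sketch (assuming the PID 3470 from the netstat output above) that maps each listener back to the process and probes one of the unknown ports to see what kind of HTTP server answers.

```
# show which listening sockets belong to the concourse process (PID 3470 above)
sudo ss -tlnp | grep 'pid=3470'

# list only the listening TCP sockets owned by that PID
sudo lsof -nP -p 3470 -a -iTCP -sTCP:LISTEN

# probe one of the ephemeral listeners to see what answers
curl -v http://127.0.0.1:41965/
```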

Failed to create container: containerizer: run pivotter: exit status 2

While trying to run the Hello World pipeline using the Concourse Linux binary, the build failed with the error "container: start: exit status 2". The following log entry seemed relevant:

{
  "timestamp": "1459290852.796461344",
  "source": "garden-linux",
  "message": "garden-linux.container.start.command.failed",
  "log_level": 2,
  "data": {
    "argv": [
      "\/etc\/concourse\/linux\/depot\/g3fssdeeh6q\/start.sh"
    ],
    "error": "exit status 2",
    "exit-status": 2,
    "handle": "g3fssdeeh6q",
    "session": "22.1.1",
    "stderr": "Failed to create container: containerizer: run pivotter: exit status 2",
    "stdout": "",
    "took": "79.289046ms"
  }
}

System Information

  • Ubuntu 15.10 on AWS (no additional packages installed other than postgres)
  • Concourse Linux binary 1.0.0
  • local postgres 9.3 database
  • web and worker running on the same machine using the quick-start script

I'll be happy to help and provide more information if needed.

Make it installable with homebrew

A lot of Labs projects run CI on Mac Minis. If Concourse were installable with Homebrew, it would be much easier to pick up and use for client projects.

Incompatible with side-by-side docker

I am trying to run concourse on a server that is also responsible for running docker containers.

When I run "concourse worker", any attempt to start a new docker container causes the docker daemon to emit the error "oci runtime error: could not synchronise with container process: no such file or directory"

The docker command then prints the error "Container command '...' not found or does not exist.."

The issue persists even after the worker shuts down, and is only resolved once the server is restarted.
Containers that were started before the concourse worker keep running indefinitely.

Garden Linux is new to me, and I have no idea how to start debugging this... Any guidance would be appreciated.

build-cli-artifacts task missing

I'm trying to build concourse using ci/pipeline.yml, which refers to the build-cli-artifacts task, but the YAML config for it is not in this repo.

Worker never works and I don't know why; the kernel is already 3.19

{"timestamp":"1457021801.072731972","source":"baggageclaim","message":"baggageclaim.listening","log_level":1,"data":{"addr":"0.0.0.0:7788"}}
{"timestamp":"1457021801.081889391","source":"garden-linux","message":"garden-linux.failed-to-parse-pool-state","log_level":2,"data":{"error":"openning state file: open /opt/concourse/worker/linux/state/port_pool.json: no such file or directory"}}
{"timestamp":"1457021801.083236933","source":"garden-linux","message":"garden-linux.unsupported-graph-driver","log_level":1,"data":{"name":"overlay"}}
{"timestamp":"1457021801.083810329","source":"garden-linux","message":"garden-linux.retain.starting","log_level":1,"data":{"session":"10"}}
{"timestamp":"1457021801.083874702","source":"garden-linux","message":"garden-linux.retain.retained","log_level":1,"data":{"session":"10"}}
{"timestamp":"1457021801.151267529","source":"garden-linux","message":"garden-linux.metrics-notifier.starting","log_level":1,"data":{"interval":"1m0s","session":"13"}}
{"timestamp":"1457021801.151365995","source":"garden-linux","message":"garden-linux.metrics-notifier.started","log_level":1,"data":{"interval":"1m0s","session":"13","time":"2016-03-04T00:16:41.151363773+08:00"}}
{"timestamp":"1457021801.152077198","source":"garden-linux","message":"garden-linux.started","log_level":1,"data":{"addr":"0.0.0.0:7777","network":"tcp"}}


[dsxiao@localhost concourse-prod]$ uname -r
3.19.3-1.el7.elrepo.x86_64
sudo ./concourse worker \
--work-dir /opt/concourse/worker \ 
--peer-ip 10.3.11.20 \
--tsa-host 10.3.11.17 \
--tsa-public-key host_key.pub \
--tsa-worker-private-key worker_key

Feature Request: Ability to clean up failed builds via the UI

As described in #24, there are cases where large artefacts can be left around when builds fail. In my case, this caused subsequent builds to fail due to lack of disk space. A method of manually cleaning up old builds between runs would be great, so that disk space can be freed up.
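
Not a fix for the feature request itself, but a hedged sketch of how the current state can at least be inspected, assuming a reasonably recent fly and a target named "ci" (both placeholders):

```
# log in to the ATC (target name and URL are examples)
fly -t ci login -c http://ci.example.com

# list the volumes and containers the ATC still tracks, including leftovers
# from failed builds, to see what is eating the disk
fly -t ci volumes
fly -t ci containers
```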

Using upstream outputs as inputs is broken on darwin

Given the following tasks running on darwin, in this order:


---
platform: darwin

inputs:
- name: some-input-from-a-resource

outputs:
- name: some-output

run:
  path: some-input-from-a-resource/some-script

---
platform: darwin

inputs:
- name: some-input-from-a-resource
- name: some-output

run:
  path: some-input-from-a-resource/some-other-script

The bottom task will fail to run because some-output has not been mounted in the container. It does not fail in the way you might expect, which would be "missing inputs: some-output".

Using the exact same tasks/scripts on a linux worker was successful.

There were no errors in the garden log associated with this.
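
One way to confirm the missing mount, sketched here with placeholder pipeline, job, and step names (older fly versions call this command hijack rather than intercept), is to jump into the failing task's container and list its working directory:

```
# expect some-input-from-a-resource to be present and some-output to be absent
fly -t ci intercept -j my-pipeline/my-job -s second-task ls -la
```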

[support] guardian inside docker cannot access docker-local DNS

Garden containers cannot resolve DNS names when the worker is run inside Docker via docker-compose, which supplies its own DNS server on a loopback address for resolving the names of other containers.

On the UI side it simply says no versions are available, but the logs show error 500 on check, and hijacking the check container shows that all DNS lookups are being routed to the docker container's (local?) DNS at 127.0.0.11 and failing with "connection refused".

Is there a way to manually supply the DNS server for garden?
Even better, is there a way to have garden use the host's network directly, so that it has access to the additional DNS names managed by docker?

Additional details:

Problem occurs on both v1.3.0-rc.9 and v1.3.0-rc.35

Relevant bits of the pipeline

resource_types:
- name: svn-resource
  type: docker-image
  source:
    repository: robophred/concourse-svn-resource
    tag: alpha

resources:
- name: src
  type: svn-resource
  source:
    repository: {{repository}}
    trust_server_cert: true
    username: {{username}}
    password: {{password}}

Hijacking the check container, it seems to be set up for docker-image. I manually sent a request to check "svn-resource":

/opt/resource # ./check
./check
{"source":{"repository":"robophred/concourse-svn-resource","tag":"alpha"}}
{"source":{"repository":"robophred/concourse-svn-resource","tag":"alpha"}}

failed to ping registry: 2 error(s) occurred:

* ping https: Get https://registry-1.docker.io/v2: dial tcp: lookup registry-1.docker.io on 127.0.0.11:53: read udp 127.0.0.1:53962->127.0.0.11:53: read: connection refused
* ping http: Get http://registry-1.docker.io/v2: dial tcp: lookup registry-1.docker.io on 127.0.0.11:53: read udp 127.0.0.1:36771->127.0.0.11:53: read: connection refused

Further exploration shows that all DNS resolution is directed at 127.0.0.11 and failing. However, I can still ping by IP.

/opt/resource # nslookup www.google.com
nslookup www.google.com
Server:    127.0.0.11
Address 1: 127.0.0.11

nslookup: can't resolve 'www.google.com'
/opt/resource #
/opt/resource # ping www.google.com
ping www.google.com
ping: bad address 'www.google.com'
/opt/resource #
/opt/resource # ping 8.8.8.8
ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=52 time=11.523 ms
64 bytes from 8.8.8.8: seq=1 ttl=52 time=12.456 ms

Connecting to the docker container, network access works just fine

root@cc9867a25143:~/concourse# cat /etc/resolv.conf
search mycompany.com
nameserver 127.0.0.11
options ndots:0
root@cc9867a25143:~/concourse# ping www.google.com
PING www.google.com (216.58.193.196) 56(84) bytes of data.
64 bytes from lax02s23-in-f4.1e100.net (216.58.193.196): icmp_seq=1 ttl=53 time=10.9 ms
64 bytes from lax02s23-in-f4.1e100.net (216.58.193.196): icmp_seq=2 ttl=53 time=10.9 ms

Worker Dockerfile:

FROM ubuntu:14.04

RUN \
  apt-get update && \
  apt-get -y install \
    iptables \
    quota \
    ulogd \
    curl \
  && \
  apt-get clean

RUN \
  mkdir -p /root/concourse && \
  cd /root/concourse && \
  curl -OL https://github.com/concourse/bin/releases/download/v1.3.0-rc.35/concourse_linux_amd64 && \
  chmod +x concourse_linux_amd64

WORKDIR /root/concourse

COPY concourse-worker-exec .
COPY host_key.pub .
COPY worker_key .
RUN chmod +x concourse-worker-exec

ENTRYPOINT /root/concourse/concourse-worker-exec

concourse-worker-exec

#!/bin/sh

set -e

mkdir -p /tmp/concourse-workdir

mkdir /tmp/concourse-workdir/graph
mount -t tmpfs none /tmp/concourse-workdir/graph

mkdir /tmp/concourse-workdir/overlays
mount -t tmpfs none /tmp/concourse-workdir/overlays

./concourse_linux_amd64 worker \
  --work-dir /tmp/concourse-workdir \
  --tsa-host "${TSA_HOST}" \
  --tsa-public-key host_key.pub \
  --tsa-worker-private-key worker_key
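
Not a Garden-level answer, but one workaround at the Docker layer is to point the worker container at an external resolver, so that whatever Garden copies into its containers no longer references the compose-embedded 127.0.0.11. A sketch using plain docker run, where the image name, DNS server, and TSA host are all placeholders:

```
# run the worker with a resolver that is reachable from inside Garden's
# network namespace (8.8.8.8 is only an example); --privileged is assumed
# to be needed for the iptables/mount setup the worker performs
docker run --privileged \
  --dns 8.8.8.8 \
  -e TSA_HOST=10.0.0.5 \
  my-concourse-worker-image
```

The docker-compose equivalent would be the dns: key on the worker service, at the cost of losing resolution of other compose service names inside Garden containers.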

Windows worker cannot connect after upgrading to 3.14.1

We have been using the concourse binary for a Windows worker. After upgrading from Concourse 3.9.2 to 3.14.1, the workers cannot connect to the TSA:

"worker","message":"worker.setup.no-assets","log_level":1,"data":{"session":"1"}}
{"timestamp":"1528724301.973331690","source":"worker","message":"worker.sweep-starting","log_level":1,"data":{}}
{"timestamp":"1528724301.973331690","source":"worker","message":"worker.reaper.reaper-server.started-reaper-process","log_level":1,"data":{"garden-addr":"127.0.0.1:7777","server-port":"7799","session":"5.1"}}
{"timestamp":"1528724301.973331690","source":"worker","message":"worker.beacon.beacon.beacon-client.failed-to-connect-to-tsa","log_level":2,"data":{"error":"dial tcp: address : missing port in address","session":"4.1.1"}}
{"timestamp":"1528724301.973331690","source":"worker","message":"worker.beacon.restarting","log_level":2,"data":{"error":"failed to connect to TSA","session":"4"}}
{"timestamp":"1528724301.974332094","source":"worker","message":"worker.garden.started","log_level":1,"data":{"session":"2"}}

We have been starting the concourse worker using cmd:
d:\concourse\concourse.exe worker /work-dir d:\concourse\work /tsa-host /tsa-public-key d:\concourse\keys\tsa_host_key.pub /tsa-worker-private-key d:\concourse\keys\worker_key >> d:\concourse\logs\worker-log.txt

We use BOSH deployment for Concourse web and Linux workers.
Is there any configuration change that needs to be done?
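
The "dial tcp: address : missing port in address" error suggests the /tsa-host value is empty, and (if memory serves) around 3.14 that flag started expecting host:port rather than a bare hostname. A sketch of the command with a placeholder web host and the default TSA port 2222:

```
d:\concourse\concourse.exe worker /work-dir d:\concourse\work /tsa-host web.example.com:2222 /tsa-public-key d:\concourse\keys\tsa_host_key.pub /tsa-worker-private-key d:\concourse\keys\worker_key >> d:\concourse\logs\worker-log.txt
```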

Concourse workers are not removed from ATC when shutting down

When I shut down a worker run via concourse worker (v3.6.0), it is not removed from the list of workers, nor are container references cleaned up. Adding it back to the pool with the same name introduces a situation where concourse thinks the worker is alive and has valid containers, when in fact they do not exist. If I change the name of the worker each time it starts up, I get stale workers piling up, and concourse seems to have trouble rebuilding existing containers on the new worker (though maybe I wasn't patient enough).

I'm shutting down concourse with SIGTERM, but have also tried SIGINT, and the behavior did not change. I was under the impression that upon terminating, the concourse worker would notify the ATC that it's leaving the worker pool, and references to it and its containers should be removed. Is this not the case, or is there a bug here?
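
For what it's worth, a hedged sketch of the two clean-up paths that exist around this version (worker and target names are placeholders; exact subcommand availability and required --tsa-* flags depend on the release):

```
# ask the worker to retire, which should deregister it from the ATC on the way
# out (in the binary distribution this also needs the usual --tsa-* flags)
concourse retire-worker --name my-worker

# or, once a dead worker shows up as stalled, remove it from the ATC via fly
fly -t ci prune-worker -w my-worker
```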

tagged version of released Concourse

All the tags currently have rc attached to them. It is safe to assume that the last rc is the released version, but it would be nice to be able to validate that.

For example, v0.74.0-rc.12 should also be tagged v0.74.0.

Right?

Depends on non-existing package github.com/concourse/bin/bindata

I tried to build the concourse cmd from this repo, but it failed with a non-existent dependency:

go get -u -v github.com/concourse/bin/cmd/concourse
... (lots of lines of deps being pulled)
package github.com/concourse/bin/bindata: cannot find package "github.com/concourse/bin/bindata" in any of:
    /usr/local/go/src/github.com/concourse/bin/bindata (from $GOROOT)
    /Users/simon/src/github.com/concourse/bin/bindata (from $GOPATH)

Am I missing something?
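
You're probably not missing anything: the bindata package appears to be generated at build time rather than checked in. A rough sketch of generating a stand-in with go-bindata — the asset directories below are guesses, and the authoritative list lives in this repo's ci/ scripts:

```
# install the go-bindata generator
go get -u github.com/jteeuwen/go-bindata/...

# generate the missing package; the embedded directories here are placeholders
go-bindata -pkg bindata -o bindata/bindata.go ./cli-artifacts/... ./assets/...
```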

Issue with worker in v2.6.1-rc.50

I am using v2.6.1-rc.50. Workers die without any error message, but I see the following messages on the web node:

{"timestamp":"1486506949.248625994","source":"tsa","message":"tsa.connection.tcpip-forward.failed-to-accept","log_level":2,"data":{"error":"accept tcp [::]:43937: use of closed network connection","remote":"10.40.37.203:47808","session":"2.2"}}
{"timestamp":"1486506949.248890638","source":"tsa","message":"tsa.connection.tcpip-forward.failed-to-accept","log_level":2,"data":{"error":"accept tcp [::]:40722: use of closed network connection","remote":"10.40.37.203:47808","session":"2.3"}}
{"timestamp":"1486506949.259580374","source":"tsa","message":"tsa.connection.channel.forward-worker.wait-for-process.failed-to-close-channel","log_level":2,"data":{"error":"EOF","remote":"10.40.37.203:47808","session":"2.1.1.4"}}

{"timestamp":"1486563375.154497623","source":"tsa","message":"tsa.connection.tcpip-forward.failed-to-accept","log_level":2,"data":{"error":"accept tcp [::]:38497: use of closed network connection","remote":"10.40.37.37:34776","session":"1.2"}}
{"timestamp":"1486563375.154541016","source":"tsa","message":"tsa.connection.channel.forward-worker.wait-for-process.failed-to-close-channel","log_level":2,"data":{"error":"EOF","remote":"10.40.37.37:34776","session":"1.1.1.4"}}

Darwin Worker: "error":"failed to dial: failed to construct client connection: ssh: handshake failed: remote host public key mismatch"

After a network change and a vagrant halt / vagrant up, trying to restart the Concourse darwin worker gives me an SSH public key mismatch.

./concourse_darwin_amd64 worker \
  --work-dir /opt/concourse/worker \
  --tsa-host 127.0.0.1 \
  --tsa-public-key host_key.pub \
  --tsa-worker-private-key worker_key

Log

{"timestamp":"1498767106.452639341","source":"worker","message":"worker.setup.no-assets","log_level":1,"data":{"session":"1"}}
{"timestamp":"1498767106.453021526","source":"worker","message":"worker.garden.started","log_level":1,"data":{"session":"2"}}
{"timestamp":"1498767106.453067780","source":"baggageclaim","message":"baggageclaim.listening","log_level":1,"data":{"addr":"127.0.0.1:7788"}}
{"timestamp":"1498767106.476364374","source":"worker","message":"worker.beacon.restarting","log_level":2,"data":{"error":"failed to dial: failed to construct client connection: ssh: handshake failed: remote host public key mismatch","session":"4"}}
{"timestamp":"1498767111.486412764","source":"worker","message":"worker.beacon.restarting","log_level":2,"data":{"error":"failed to dial: failed to construct client connection: ssh: handshake failed: remote host public key mismatch","session":"4"}}
{"timestamp":"1498767116.494427204","source":"worker","message":"worker.beacon.restarting","log_level":2,"data":{"error":"failed to dial: failed to construct client connection: ssh: handshake failed: remote host public key mismatch","session":"4"}}
{"timestamp":"1498767121.507308960","source":"worker","message":"worker.beacon.restarting","log_level":2,"data":{"error":"failed to dial: failed to construct client connection: ssh: handshake failed: remote host public key mismatch","session":"4"}}
...

Preceding steps I took:

  1. Creating a Darwin worker:

    • Generate a worker key

      ssh-keygen -t rsa -f worker_key -N ''
      
    • Create worker directory.

      mkdir /opt/concourse/worker
      
  2. Start concourse by running vagrant up from the working directory.

  3. SSH into the vagrant box. Ensure that you have already run vagrant up previously.

    ```
    vagrant ssh
    ```
    
    • Append the contents of worker_key.pub to the authorized_worker_keys file. This assumes that Vagrant has mounted your working directory as /vagrant. You need to be a root user in order to modify the authorized_worker_keys file.

      sudo su
      cat /vagrant/worker_key.pub >> /opt/concourse/authorized_worker_keys
      exit
      
    • Copy host_key.pub to the mounted working directory folder.

      cp /opt/concourse/host_key.pub /vagrant/
      
    • Restart web and worker services.

      sudo service concourse-web restart
      sudo service concourse-worker restart
      

Keys are identical

Via vagrant ssh I hashed the host_key.pub on both the macOS instance and the Vagrant VM (located in /opt/concourse) and they are the same. I also removed the hosts from ~/.ssh/known_hosts, but I am at a loss.
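
For completeness, the commands used for the comparison, plus a check of which key the TSA actually presents on the wire (port 2222 is the default TSA port; paths are the ones from the steps above):

```
# fingerprint of the host key the worker was told to trust (on the mac)
ssh-keygen -lf host_key.pub

# fingerprint of the key inside the vagrant box
vagrant ssh -c "ssh-keygen -lf /opt/concourse/host_key.pub"

# key actually served by the TSA's SSH endpoint after the restart
ssh-keyscan -p 2222 127.0.0.1
```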

"no space left on device" error executing build

Hello,

Wasn't sure what the correct spot for this was, so I'm trying here.

I'm trying to run some tests using a new Java container, and the pull can't finish because it reports a "no space left on device" error writing to /var/lib/docker/tmp/.

Running df -h inside the worker containers, I can see that the / and /etc/hosts mounts are completely full and have 71G allocated to them. Should concourse be cleaning up these directories? The latter, /etc/hosts, is odd because, looking in the container, it appears to be a small file. The former, /, appears to be mostly taken up by the /worker-state directory. Under there, there are a number of "live" volumes, two of which are 22G and 30G in size.

If concourse is not supposed to clean these up, then why is this filling up, and how do I manage the space without having to restart the worker containers periodically? Yes, I could mount a volume, but that really only delays the inevitable.
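
In the meantime, a quick sketch for seeing where the space is going before restarting (run inside the worker container; the volume layout under /worker-state is assumed from the report above):

```
# overall usage of the worker's state directory
df -h /worker-state

# per-volume usage, biggest last (the live volume path layout is an assumption)
du -sh /worker-state/volumes/live/* 2>/dev/null | sort -h | tail -20
```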

Document building concourse-bin

You have some scripts for building, but they have some fairly baked-in assumptions.

I tried building on Mac OS X and kept getting errors. I tried to replicate the expected paths and such as found in those scripts, but after a while I just let it go. I also noticed that build-linux assumes an apt-get based system.

I'd be happy to help write up docs if I could get a build going. 👷

TSA ignores log level and floods network

Hi! It looks like the TSA is ignoring --log-level and floods the log with debugging messages:

{"timestamp":"1488811617.190356493","source":"tsa","message":"tsa.connection.tcpip-forward.forward-conn.io-complete","log_level":0,"data":{"remote":"x.x.x.x:58068","session":"1.3.152123"}}
{"timestamp":"1488811617.190447092","source":"tsa","message":"tsa.connection.tcpip-forward.forward-conn.done-waiting","log_level":0,"data":{"remote":"x.x.x.x:58068","session":"1.3.152123"}}
{"timestamp":"1488811617.190531492","source":"tsa","message":"tsa.connection.tcpip-forward.forward-conn.closing-forwarded-tcpip","log_level":0,"data":{"remote":"x.x.x.x:58068","session":"1.3.152123"}}
{"timestamp":"1488811617.193816662","source":"tsa","message":"tsa.connection.tcpip-forward.forward-conn.waiting-for-tcpip-io","log_level":0,"data":{"remote":"x.x.x.x:58068","session":"1.3.152124"}}
{"timestamp":"1488811617.194487333","source":"tsa","message":"tsa.connection.tcpip-forward.forward-conn.io-complete","log_level":0,"data":{"remote":"x.x.x.x:58068","session":"1.3.152124"}}
{"timestamp":"1488811617.194570780","source":"tsa","message":"tsa.connection.tcpip-forward.forward-conn.done-waiting","log_level":0,"data":{"remote":"x.x.x.x:58068","session":"1.3.152124"}}
{"timestamp":"1488811617.194652319","source":"tsa","message":"tsa.connection.tcpip-forward.forward-conn.closing-forwarded-tcpip","log_level":0,"data":{"remote":"x.x.x.x:58068","session":"1.3.152124"}}
{"timestamp":"1488811617.195272446","source":"tsa","message":"tsa.connection.tcpip-forward.forward-conn.waiting-for-tcpip-io","log_level":0,"data":{"remote":"x.x.x.x:58068","session":"1.3.152125"}}
{"timestamp":"1488811617.217427731","source":"tsa","message":"tsa.connection.tcpip-forward.forward-conn.io-complete","log_level":0,"data":{"remote":"x.x.x.x:58068","session":"1.3.152125"}}
{"timestamp":"1488811617.217528343","source":"tsa","message":"tsa.connection.tcpip-forward.forward-conn.done-waiting","log_level":0,"data":{"remote":"x.x.x.x:58068","session":"1.3.152125"}}
{"timestamp":"1488811617.217605352","source":"tsa","message":"tsa.connection.tcpip-forward.forward-conn.closing-forwarded-tcpip","log_level":0,"data":{"remote":"x.x.x.x:58068","session":"1.3.152125"}}
{"timestamp":"1488811617.221923351","source":"tsa","message":"tsa.connection.tcpip-forward.forward-conn.waiting-for-tcpip-io","log_level":0,"data":{"remote":"x.x.x.x:58068","session":"1.3.152126"}}
{"timestamp":"1488811617.222908258","source":"tsa","message":"tsa.connection.tcpip-forward.forward-conn.io-complete","log_level":0,"data":{"remote":"x.x.x.x:58068","session":"1.3.152126"}}
{"timestamp":"1488811617.223042965","source":"tsa","message":"tsa.connection.tcpip-forward.forward-conn.done-waiting","log_level":0,"data":{"remote":"x.x.x.x:58068","session":"1.3.152126"}}
{"timestamp":"1488811617.223119497","source":"tsa","message":"tsa.connection.tcpip-forward.forward-conn.closing-forwarded-tcpip","log_level":0,"data":{"remote":"x.x.x.x:58068","session":"1.3.152126"}}
{"timestamp":"1488811617.223601103","source":"tsa","message":"tsa.connection.tcpip-forward.forward-conn.waiting-for-tcpip-io","log_level":0,"data":{"remote":"x.x.x.x:58068","session":"1.3.152127"}}
{"timestamp":"1488811617.224380016","source":"tsa","message":"tsa.connection.tcpip-forward.forward-conn.io-complete","log_level":0,"data":{"remote":"x.x.x.x:58068","session":"1.3.152127"}}
{"timestamp":"1488811617.224458456","source":"tsa","message":"tsa.connection.tcpip-forward.forward-conn.done-waiting","log_level":0,"data":{"remote":"x.x.x.x:58068","session":"1.3.152127"}}
{"timestamp":"1488811617.224538565","source":"tsa","message":"tsa.connection.tcpip-forward.forward-conn.closing-forwarded-tcpip","log_level":0,"data":{"remote":"x.x.x.x:58068","session":"1.3.152127"}}
{"timestamp":"1488811617.227306128","source":"tsa","message":"tsa.connection.tcpip-forward.forward-conn.waiting-for-tcpip-io","log_level":0,"data":{"remote":"x.x.x.x:58068","session":"1.3.152128"}}
{"timestamp":"1488811617.227980137","source":"tsa","message":"tsa.connection.tcpip-forward.forward-conn.io-complete","log_level":0,"data":{"remote":"x.x.x.x:58068","session":"1.3.152128"}}
{"timestamp":"1488811617.228060484","source":"tsa","message":"tsa.connection.tcpip-forward.forward-conn.done-waiting","log_level":0,"data":{"remote":"x.x.x.x:58068","session":"1.3.152128"}}
{"timestamp":"1488811617.228135109","source":"tsa","message":"tsa.connection.tcpip-forward.forward-conn.closing-forwarded-tcpip","log_level":0,"data":{"remote":"x.x.x.x:58068","session":"1.3.152128"}}
{"timestamp":"1488811617.228696585","source":"tsa","message":"tsa.connection.tcpip-forward.forward-conn.waiting-for-tcpip-io","log_level":0,"data":{"remote":"x.x.x.x:58068","session":"1.3.152129"}}

The high rate of TSA messages fills log buffers really quickly.
It also looks like something is broken in the TSA itself; it generates an insane number of quickly dying connections:


$ sudo netstat -anp --inet --inet6 | grep TIME_WAIT | wc -l
9814

It started with 2.6.0 and nothing has changed since then; 2.7.0 behaves the same.

# uname -a
Linux concourse 4.4.0-65-generic #86-Ubuntu SMP Thu Feb 23 17:49:58 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

# lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 16.04.2 LTS
Release:	16.04
Codename:	xenial

# ./concourse --version
2.7.0

Failure to build docker image

make[2]: Entering directory '/tmp/tar-1.28/tests'
make[2]: Nothing to be done for 'all'.
make[2]: Leaving directory '/tmp/tar-1.28/tests'
make[2]: Entering directory '/tmp/tar-1.28'
make[2]: Leaving directory '/tmp/tar-1.28'
make[1]: Leaving directory '/tmp/tar-1.28'
rm: cannot remove 'tar-1.28/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/confdir3/.../confdir3/confdir3/confdir3': File name too long
The command '/bin/sh -c cd /tmp && curl https://ftp.gnu.org/gnu/tar/tar-1.28.tar.gz | tar zxf - &&       cd tar-1.28 &&         FORCE_UNSAFE_CONFIGURE=1 ./configure &&         make LDFLAGS=-static &&         cp src/tar /opt/static-assets/tar &&       cd .. &&       rm -rf tar-1.28' returned a non-zero code: 1

TSA Logging not honored by environment variables

I keep seeing non-error messages in my logs from the TSA worker heartbeat processes, despite having set log levels to error using all the env vars available to me. It seems like maybe the concourse worker process needs to be made aware of the CONCOURSE_TSA_LOG_LEVEL setting, since the process spewing these out is /usr/bin/concourse worker --name 38f7dbcc-8948-48a7-bac2-cd4e5fbd02bc

Oct 30 17:59:10 ci.spruce.cf bash: {"timestamp":"1509400749.957183838","source":"tsa","message":"tsa.connection.channel.forward-worker.heartbeat.start","log_level":1,"data":{"remote":"127.0.0.1:44464","session":"1.1.1.2939","worker-address":"127.0.0.1:41965","worker-platform":"linux","worker-tags":""}}
Oct 30 17:59:10 ci.spruce.cf bash: {"timestamp":"1509400749.964937210","source":"tsa","message":"tsa.connection.channel.forward-worker.heartbeat.reached-worker","log_level":0,"data":{"baggageclaim-took":"4.809423ms","garden-took":"2.729756ms","remote":"127.0.0.1:44464","session":"1.1.1.2939"}}
Oct 30 17:59:10 ci.spruce.cf bash: {"timestamp":"1509400749.974787951","source":"tsa","message":"tsa.connection.channel.forward-worker.heartbeat.done","log_level":1,"data":{"remote":"127.0.0.1:44464","session":"1.1.1.2939","worker-address":"127.0.0.1:41965","worker-platform":"linux","worker-tags":""}}
CONCOURSE_LOG_LEVEL=error
CONCOURSE_TSA_LOG_LEVEL=error
CONCOURSE_BAGGAGECLAIM_LOG_LEVEL=error
CONCOURSE_GARDEN_LOG_LEVEL=error
