dagger / dagger

Application Delivery as Code that Runs Anywhere

Home Page: https://dagger.io

License: Apache License 2.0

Go 70.62% Dockerfile 0.05% JavaScript 0.03% Python 5.98% CSS 0.19% TypeScript 4.29% Shell 0.38% Groovy 0.42% Rust 10.21% HTML 0.45% Elixir 2.35% C# 0.47% Java 3.09% Smarty 0.05% PHP 1.35% PowerShell 0.08%
buildkit ci-cd containers continuous-delivery continuous-deployment continuous-integration deployment devops docker

dagger's Introduction

What is Dagger?

Dagger is a tool that lets you replace your software project's artisanal scripts with a modern API and cross-language scripting engine.

  1. Encapsulate all your project's tasks and workflows into simple functions, written in your programming language of choice
  2. Dagger packages your functions into a custom GraphQL API
  3. Run your functions from the CLI, your language interpreter, or a custom HTTP client
  4. Package your functions into a module, to reuse in your next project or share with the community
  5. Search the Daggerverse for existing modules, and import them into yours. All Dagger modules can reuse each other's functions - across languages.
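Point 3 above (a custom HTTP client) amounts to sending an ordinary GraphQL request to the engine. A minimal sketch of building such a request; the query shape is illustrative, and in practice the session endpoint and token come from the dagger CLI:

```python
import json

# Hypothetical function call expressed as a GraphQL query. The field names
# below are illustrative, not a guaranteed schema.
query = """
query {
  container {
    from(address: "alpine:3.19") {
      withExec(args: ["echo", "hello"]) {
        stdout
      }
    }
  }
}
"""

# A custom HTTP client would POST this JSON body to the local session endpoint.
payload = json.dumps({"query": query})
```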

Benefits to app teams

  • Reduce complexity: even complex builds can be expressed as a few simple functions
  • No more "push and pray": everything CI can do, your dev environment can do too
  • Use the same language to develop your app and its scripts
  • Easy onboarding of new developers: if you can build, test and deploy - they can too.
  • Everything is cached by default: expect 2x to 10x speedups
  • Parity between dev and CI environments
  • Cross-team collaboration: reuse another team's workflows without learning their stack

Benefits to platform teams

  • Reduce CI lock-in. Dagger functions run on all major CI platforms - no proprietary DSL needed.
  • Don't be a bottleneck. Let app teams write their own functions. Enable standardization by providing them a library of reusable components.
  • Faster CI runs. CI pipelines that are "Daggerized" typically run 2x to 10x faster, thanks to caching and concurrency. This means developers waste less time waiting for CI, and you spend less money on CI compute.
  • A viable platform strategy. App teams need flexibility, and you need control. Dagger gives you a way to reconcile the two, in an incremental way that leverages the stack you already have.

Learn more

Join the community

Contributing

Interested in contributing or building dagger from scratch? See CONTRIBUTING.md.

dagger's People

Contributors

aluzzardi, brittandeyoung, crjm, dependabot[bot], dolanor, dubo-dubon-duponey, gerhard, github-actions[bot], grouville, helderco, jcsirot, jedevc, jlongtine, jpadams, kgb33, kjuulh, kpenfound, marcosnils, matiasinsaurralde, mircubed, samalba, shykes, sipsma, slumbering, tjovicic, tomchv, verdverm, vikram-dagger, vito, wingyplus


dagger's Issues

--input comes "too late" and cannot override default values

Consider the following:

foo: string | *"default foo"

and running:

dagger compute . --input 'foo: "lalal"'

Result:

2:15PM FTL failed to compute error="input.cue: foo: conflicting values \"lalal\" and \"default foo\""

"Naive" expectation would be that it would work fine, and that "foo" would evaluate as "lalal".

#Copy marks src as optional, but it is not, and should default to the schema default

With this

		dagger.#Copy & {
			from: [
				dagger.#FetchContainer & { ref: alpine.ref },
			]
			dest: "/src"
		},

Dagger will complain that src is not specified.

The dagger.cue schema specifies

#Copy: {
	do:    "copy"
	from:  #Script | #Component
	src?:  string | *"/"
	dest?: string | *"/"
}

Perhaps it should instead be:

#Copy: {
	do:    "copy"
	from:  #Script | #Component
	src:  string | *"/"
	dest: string | *"/"
}

Export: bool - is it working?

testbool: {
	bool

	#dagger: {
		compute: [
			{
				do: "fetch-container"
				ref: "alpine"
			},
			{
				do: "exec"
				args: ["sh", "-c", """
				printf "true" > /tmp/out
				"""
				]
				dir: "/"
			},
			{
				do: "export"
				// Source path in the container
				source: "/tmp/out"
				format: "bool"
			},
		]
	}
}

Will output:

7:39PM FTL failed to compute error="buildkit solve: task failed: execute op 3/3: invalid action: no \"do\" field\n"

#Mount does not appear to work

Can't make them work (regardless of type).

4:16PM ERR component failed error="execute op 2/3: empty disjunction\n" path=test
4:16PM FTL failed to compute error="buildkit solve:  task failed: execute op 2/3: empty disjunction\n\n"

Creating this issue to track that in disabled tests.

Question: --input and environment

The current --input mechanism is very satisfying - flexible and powerful.

However, injecting environment variables is a bit cumbersome and requires some serious shell gymnastics.

Typically, if I have a cue file like this:

myenv: {
  foo: string | *"foo"
  bar: string | *"bar"
  baz: string | *"baz"
  ble: string | *"ble"
  lol: string | *"lol"
}


foo: dagger: #compute: {
   // lalalala
}

And if I want to be able to override "myenv" with corresponding environment variables, I have to write a shell script along the lines of:

envs=()
while read -r line; do
  if [ "${!line+x}" ]; then
    # mind the step!
    envs+=(--input "myenv: $line: \"$(printf "%s" "${!line}" | sed -E 's/"/\\"/g')\"")
  fi
done <<<"foo
bar
baz
ble
lol"

dagger compute gros_cue "${envs[@]}" "$@"

Sure, it's "doable", but:

  • mere mortals will likely find it challenging to come up with the above (you have to: 1. check if the variable is defined or not, then 2. evaluate it, and 3. escape it properly, which is not that trivial)
  • if you want something nice like here and maintainable, you have to basically duplicate the list of "vars" you are interested in, on the cue side, and inside a wrapper script (I'm certainly not going to type the above ^^^ on the command line every time I want to run my stuff)

So, would you be open to considering some additional sugar to ease things up?

Maybe it could be something like:

dagger compute cue --bind-env 'myenv'

The behavior would be exactly the same as above:

  • for each property listed on the cue side under myenv, check if an environment variable by that name is defined or not
  • if it's defined, escape it and inject it into cue land under myenv
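The two steps above can be sketched in Python; the flag name and the exact escaping rules are assumptions taken from the proposal:

```python
import os

def bind_env(namespace, keys):
    """Sketch of the proposed --bind-env: for each listed key, if an
    environment variable by that name is defined, escape it and emit a
    --input flag injecting it into cue land under the given namespace."""
    inputs = []
    for key in keys:
        if key in os.environ:
            value = os.environ[key].replace('"', '\\"')
            inputs += ["--input", f'{namespace}: {key}: "{value}"']
    return inputs

os.environ["foo"] = 'say "hi"'
os.environ.pop("bar", None)  # undefined vars are simply skipped
flags = bind_env("myenv", ["foo", "bar"])
```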

This proposal seems to alleviate both:

  • duplication / cost of maintenance
  • newcomer bear trap

wdyt?

Possible enhancements to export

Following conversation in https://discord.com/channels/707636530424053791/796905486145683506/799492277285552139

  1. it's mildly annoying having to create a file explicitly inside Exec

Ideally, the default would be to export stdout directly.

Something like:

{
    do: "exec"
    args: ["sh", "-c", """
    echo foo foo fluff
    """
   ]
},
{
    do: "export"
    format: "json"
},

While still retaining the possibility to point to a specific file through "source".

It would also be very nice to be able to export /dev/stderr (without having to create an additional file either), e.g. by just pointing source at /dev/stderr.

  2. currently, the default being "string" means a first, naive contact with export is counter-intuitive: it will not work right away, since people are expected to embed a string scalar in their top level object

What if the default was json instead? It would then marshal into the struct nicely without further ado.

  3. finally, it would be pretty rad to have the struct embedded type dictate the marshaling method (string, bool, number, struct) without having to specify "format".

Container Push Support

In theory, I have a container produced by dagger that matches my previous Dockerfile / bash setup. However, I have no way to run this.

  • How do I eject this image so that I can run it locally?
  • How can I push it to a container registry for use in my cluster?

Seems like there could be commands or operations for each.

Docker Image Metadata

#141 added some light support for image metadata

However, this is only half the battle.

If you were to do a FetchContainer followed by a PushContainer (?), all the metadata would be lost.

Regardless of using FetchContainer or not -- if we don't keep track of image metadata during pipeline execution, our resulting images cannot contain things such as "sticky" ENV (e.g. a docker run of an image produced by dagger will lack PATH altogether).

Flagging as priority/high for now, might decide to downgrade later.

Buildkit treats invalid image "successfully" if corresponding digest has already been pulled

This is not a dagger issue, but a buildkit one.

Basically, if busybox@sha256:e2af53705b841ace3ab3a44998663d4251d33ee8a9acaf71b66df4ae01c3bbe7 has previously been retrieved and cached by buildkit, any subsequent call to
WHATEVERLALAL@sha256:e2af53705b841ace3ab3a44998663d4251d33ee8a9acaf71b66df4ae01c3bbe7 will succeed (unexpectedly).
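The behavior is consistent with a cache keyed on digest alone: once the blob is stored, the repository name in the reference is never consulted again. A toy sketch (not buildkit's actual code):

```python
import hashlib

class DigestCache:
    """Toy content-addressed store: blobs are keyed by digest only, so on
    a cache hit the repository name in the reference is never checked."""
    def __init__(self):
        self.blobs = {}

    def pull(self, ref, fetch):
        name, _, digest = ref.partition("@")
        if digest in self.blobs:   # hit: 'name' is ignored entirely
            return self.blobs[digest]
        blob = fetch(name)         # only a miss ever resolves the name
        assert "sha256:" + hashlib.sha256(blob).hexdigest() == digest
        self.blobs[digest] = blob
        return blob

def never_called(name):
    raise RuntimeError("registry should not be contacted on a hit")

cache = DigestCache()
blob = b"busybox-layer"
digest = "sha256:" + hashlib.sha256(blob).hexdigest()
cache.pull("busybox@" + digest, lambda name: blob)
# A bogus name with an already-cached digest "succeeds" unexpectedly:
result = cache.pull("WHATEVERLALAL@" + digest, never_called)
```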

This should be reported to buildkit eventually.

This makes one of the tests fail (fetch-container/nonexistent/image-with-valid-digest); it is currently disabled.

Adding properties to a #Component or #ComponentConfig causes #Scripts to fail recognizing it as a #Component

Consider:

package testing

component: #dagger: {
	something: "THIS PROPERTY WILL WRECK HAVOC"
	compute: [{
		do: "fetch-container"
		ref: "alpine"
	}, {
		do: "exec"
		args: ["sh", "-c", """
		printf lol > /id
		"""]
		dir: "/"
	}]
}

test: {
	string

	#dagger: {
		compute: [
			{
				do: "load"
				from: component
			},
			{
				do: "export"
				source: "/id"
				format: "string"
			},
		]
	}
}

My guess is that this is not limited to "load".

I'm not quite sure my title describes accurately what's going on.

I'm also not quite sure whether this is intended/forbidden or not.

Bottom-line: it breaks :-)

DockerBuild logs are not contextual

We encode the cue path in LLB operations. Because we call an external frontend for DockerBuild, we are not the ones generating the LLB, so the contextual information is missing.

This needs investigating. A possible solution would be to use dockerfile2llb directly and alter the LLB, but there are drawbacks (e.g. dockerfile.v0 does more than dockerfile2llb, so we would have to reimplement that or lose functionality; the problem would also persist for alternative Dockerfile syntaxes).

Cue in subdirs not passed or processed

Cue code can be split into multiple directories; however, dagger has a filter which prevents subdirectories from being passed to buildkit. The following is valid and evaluates with the cue CLI, but errors in dagger with FTL failed to compute error="buildkit solve: base config: import failed: cannot find package \"example.com/app/sub\": ...

https://github.com/blocklayerhq/dagger/blob/main/dagger/input.go#L182


cue.mod/module.cue:

module: "example.com/app"

app.cue:

package app

import (
  "example.com/app/sub"
)
image: {
    string
    #dagger: {
        compute: [{
            do:  "fetch-container"
            ref: "nginx:stable"
        }, {
            do: "exec"
            args: ["sh", "-c", "ls /usr/share/nginx/html > \(sub.outdir)"]
            dir: "/"
            mount: {}
        }, {
            do:     "export"
            source: sub.outdir
            format: "string"
        }]
    }
}

sub/foo.cue:

package sub

outdir: "/out"

Question: what is the "right" way to consume cue values inside a shell script

cake: "1\"2\n3"

oven: {
  string

  #dagger: compute: [
    {
      do: "fetch-container"
      ref: "busybox"
    },
    {
      do: "exec"
      args: ["sh", "-c", """
      # Option 1
      # echo "$foo" > /out
      # Option 2
      # echo "\(cake)" > /out
      """]
      dir: "/"
      env: foo: cake
    },
    {
      do: "export"
      source: "/out"
      format: "string"
    }
  ]
}

Option 1 will "work" - option 2 will fail (obviously?).

Is it fair to assume that direct references to cue values from inside a script will break and should absolutely be avoided (option 2), and that only option 1 is the right way to go? Or is there a better way to do this?
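Outside dagger, the difference between the two options can be demonstrated directly; a Python sketch of why option 1 (passing the value through env) is the safe route:

```python
import os
import subprocess

cake = '1"2\n3'  # a quote and a newline, as in the cue value above

# Option 1: hand the value over via the environment. The shell reads it
# with "$foo", so no escaping is needed and the bytes arrive untouched.
out = subprocess.run(
    ["sh", "-c", 'printf "%s" "$foo"'],
    env=dict(os.environ, foo=cake),
    capture_output=True,
    text=True,
).stdout

# Option 2 would splice the raw value into the script source itself, where
# the embedded double quote terminates the shell string and breaks parsing.
```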

Exec does not work without a mount

Exec without any explicit mount:

package acme

import (
	"dagger.cloud/dagger"
	"dagger.cloud/alpine"
)

let base=alpine & {
	package: {
		bash: ">3.0"
		rsync: true
	}
}


www: {

	source: {
		#dagger: compute: [
            dagger.#Load & { from: base },
			dagger.#Exec & {
			    args: ["sh", "-c", "echo", "foo"]
			}
		]
	}
}

Output:

[info] Running
buildkit solve:  task failed: execute op 2/2: value "#Mount" not found

Adding an empty placeholder mount works around the issue:

package acme

import (
	"dagger.cloud/dagger"
	"dagger.cloud/alpine"
)

let base=alpine & {
	package: {
		bash: ">3.0"
		rsync: true
	}
}


www: {

	source: {
		#dagger: compute: [
            dagger.#Load & { from: base },
			dagger.#Exec & {
			    args: ["sh", "-c", "echo", "foo"]
			    // XXX workaround exec not working without a mount
			    mount: foo: {}
			}
		]
	}
}

Issue with non concrete values and packages

The following will work just fine:

package acme

DANG: string

#dagger: {
	compute: [
		{
			do:  "fetch-container"
			ref: "alpine"
		},
	]
}

Although DANG is not concrete, it is not a problem, since it is not referenced. The script will execute fine.

Now, put it in a package submodule:

package alpine

DANG: string

#dagger: {
	compute: [
		{
			do:  "fetch-container"
			ref: "alpine"
		},
	]
}

And this in the calling scope:

package acme

import (
	"dagger.cloud/alpine"
)

#dagger: {
	compute: [
		{
			do: "load",
			from: alpine
		},
	]
}

Now, it complains about DANG not being concrete.

Am I missing something?
Any thoughts?

FWIW: I'm running e10025d

Difficulties with --input do: "local"

  1. Double quote fail?
./cmd/dagger/dagger compute --input "bar: #dagger: compute: [{do: \"local\", dir: \"$(pwd)/examples\", include: ["*"]}]" examples/tests/local

Will fail with:

10:02AM FTL failed to compute error="buildkit solve:  task failed: rpc error: code = NotFound desc = no access allowed to dir \"/Users/dmp/Projects/Go/src/github.com/blocklayerhq/dagger/examples\"\n"

So, it looks like double quotes are treated as if they were part of the dirname.

  2. Using single quotes gives a conflicting value between bytes and string?
./cmd/dagger/dagger compute --input "bar: #dagger: compute: [{do: \"local\", dir: '$(pwd)/examples', include: [\"*\"]}]" examples/tests/local

Will fail with:

10:03AM FTL failed to compute error="buildkit solve:  task failed: #Script.0: empty disjunction: 7 related errors:\n#Script.0.dir: conflicting values '/Users/dmp/Projects/Go/src/github.com/blocklayerhq/dagger/examples' and string (mismatched types bytes and string):\n    input.cue:1:44\n    spec.cue:56:54\n    spec.cue:68:12\n#Script.0.do: conflicting values \"copy\" and \"local\":\n    input.cue:1:30\n    spec.cue:56:63\n    spec.cue:111:8\n#Script.0.do: conflicting values \"exec\" and \"local\":\n    input.cue:1:30\n    spec.cue:56:46\n    spec.cue:80:6\n#Script.0.do: conflicting values \"export\" and \"local\":\n    input.cue:1:30\n    spec.cue:56:36\n    spec.cue:60:6\n#Script.0.do: conflicting values \"fetch-container\" and \"local\":\n    input.cue:1:30\n    spec.cue:56:6\n    spec.cue:100:7\n#Script.0.do: conflicting values \"fetch-git\" and \"local\":\n    input.cue:1:30\n    spec.cue:56:24\n    spec.cue:105:10\n#Script.0.do: conflicting values \"load\" and \"local\":\n    input.cue:1:30\n    spec.cue:56:71\n    spec.cue:75:8\n\n"

FWIW the problem is the same with surrounding single quotes and double quotes inside:

./cmd/dagger/dagger compute --input 'bar: #dagger: compute: [{do: "local", dir: "/Users/dmp/Projects/Go/src/github.com/blocklayerhq/dagger/examples", include: ["tests/test-lib.sh"]}]' examples/tests/local

local: dir: is not happy if the directory has a trailing slash

{
  do: "local"
  dir: "/somewhere/"
  include: []
},

Will fail with: error="buildkit solve: task failed: rpc error: code = NotFound desc = no access allowed to dir \"/somewhere/\"\n"

The expectation is that a trailing slash should not matter, I guess.

(note that this is with "local" in base config, and following conversation on Discord, it seems that local is not meant to be used that way but rather with --input from the cli)
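One plausible fix is for dagger to normalize the dir value before registering it with buildkit, the equivalent of Go's filepath.Clean. A sketch, assuming normalization is acceptable here:

```python
import os.path

def clean_dir(dir_value):
    """Strip redundant separators and any trailing slash before handing
    the directory to buildkit, so "/somewhere/" and "/somewhere" match."""
    return os.path.normpath(dir_value)
```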

#Copy: from: broken caching

Run this:

package testing

test: {
	string

	#dagger: {
		compute: [
			{
				do: "fetch-container"
				ref: "busybox"
			},
			{
				do: "copy"
				from: [{do: "fetch-container", ref: "busybox"}]
				src: "/etc/issue"
				dest: "/"
			},
			{
				do: "export"
				source: "/issue"
				format: "string"
			},
		]
	}
}

This will fail, as expected, since busybox does not have /etc/issue.

Now, edit it to:

package testing

test: {
	string

	#dagger: {
		compute: [
			{
				do: "fetch-container"
				ref: "busybox"
			},
			{
				do: "copy"
				from: [{do: "fetch-container", ref: "alpine"}]
				src: "/etc/issue"
				dest: "/"
			},
			{
				do: "export"
				source: "/issue"
				format: "string"
			},
		]
	}
}

Run again. It now works. Because alpine has /etc/issue.

Now, revert back to the first version.

It works while it should not (not sure why / if it's a buildkit caching issue).

This problem shows with both #Script and #Component in the "from" (I assume they are actually the same, and being able to use a #Script is just syntactic sugar).

Example userland implementation of a better exec

There are a number of usability issues with exec overall:

  • #74 - exit code from script is not captured / forwarded
  • #48 - dagger is not prescriptive on how arguments should be passed (eg: there are multiple different ways to shoot yourself)
  • #38 - if you want to consume stdout and/or stderr, you have to do all the legwork (this is not entirely trivial)
  • #27 - the current way to run a command from a shell properly is cumbersome ["sh", "-c", """"lalalal""", "--", "arg1"]

Below is an attempt at working around / alleviating these issues, purely in userland.

I'm posting it here for others to use if they find it useful, and as a conversation starter for a possible future "better" #Exec.

As a package:

package exec

import (
  // XXX point this to wherever dagger headers are residing
  "duponey.cloud/dagger"
)

command: string
arguments: [...string]

#Exec: dagger.#Exec & {
  args: ["bash", "-c", #"""
    set -o errexit -o errtrace -o functrace -o nounset -o pipefail

    catch() {
      local stdex=0
      local stdout=""
      local stderr=""

      printf "{\n" >&2

      stderr="$(
        {
          stdout="$("$@")";
        } 2>&1 || stdex="$?"
        printf '\t"stdex": %d,\n' "$stdex" >&2
        [ "$stdout" == "" ] && printf '\t"stdout": "",\n' >&2 || printf '\t"stdout": "%s",\n' "$(printf "%s" "$stdout" | sed -E 's/"/\\"/g' | sed -e ':a' -e 'N' -e '$!ba' -e 's/\n/\\n/g')" >&2
      )"

      [ "$stderr" == "" ] && printf '\t"stderr": ""\n' >&2 || printf '\t"stderr": "%s"\n' "$(printf "%s" "$stderr" | sed -E 's/"/\\"/g' | sed -e ':a' -e 'N' -e '$!ba' -e 's/\n/\\n/g')" >&2;

      printf "}" >&2
    }

    catch "$@" 2> /output
    """#, "--", command] + arguments
}

How to use, in the calling scope:

import (
  "yourpackagefromabove/exec"
)

foo: {
  stdout: string
  stderr: string
  stdex: number

  #dagger: compute: [
    {
      do: "fetch-container"
      ref: "dubodubonduponey/debian"
    },
    (exec & {
      command: "echo"
      arguments: ["Plougastel"]
    }).#Exec,
    {
      do: "export"
      source: "/output"
      format: "json"
    }
  ]
}

Notes:

  • this replaces just exec - the actual export still has to be done, and marshalled into a top level object - while this is the most flexible and still allows compositing with other #Ops, a better solution would take care of the export as well
  • this is limited to single commands - ideally, this would also allow full-on scripts - not quite sure how to express that yet (UX)

Component chaining kills performance

Chaining components (e.g. referencing one component from another) is exponentially slow and CPU intensive.

Example with C --> B --> A:

A: #dagger: compute: []

B: #dagger: compute: [
	dagger.#FetchContainer & { ref: "alpine" },
	dagger.#Exec & {
		args: ["sh", "-c", "echo hello world"]
		mount: "/tmp/A": from: A
	}
]

C: #dagger: compute: [
	dagger.#FetchContainer & { ref: "alpine" },
	dagger.#Exec & {
		args: ["sh", "-c", "echo hello world"]
		mount: "/tmp/B": from: B
	}
]

D: #dagger: compute: [
	dagger.#FetchContainer & { ref: "alpine" },
	dagger.#Exec & {
		args: ["sh", "-c", "echo hello world"]
		mount: "/tmp/C": from: C
	}
]
  • The above takes 13 seconds to compute (CPU at full blast)
  • If I remove a layer (C mounts from A directly), it goes down to 1.3 seconds
  • If I add a layer (D which mounts from C), I don't know how long it takes; I killed it after 12 minutes of processing

Exec: unspecified env values cancel execution?

This will rightfully fail:

package testing

bar: string

#dagger: compute: [
	{
		do: "fetch-container"
		ref: "alpine"
	},
	{
		do: "exec"
		args: ["FAILPLEASE"]
		// env: foo: bar
		always: true
		dir: "/"
	},
]

Now, uncomment env: foo: bar.

This just silently succeeds (i.e. the exec does not seem to run).

Sorry if I missed something obvious here.

Configurations should work without explicit `package` clause

Currently even a simple cue configuration cannot be evaluated without a package clause. It doesn’t matter what the package name is, it just needs to exist.

In the interest of reducing the barrier to entry, we should reduce the cognitive load of a basic working configuration to a minimum. Since this mandatory package clause does not convey any useful information, we should make it optional, and not use one in example configurations.

NOTE: when creating a reusable cue package with the goal of importing it, of course it should always have a package clause in order for import to work.

Caching does not refresh when it should?

Consider the following (case 1):

package testing

#dagger: compute: [
	{
		do: "fetch-container"
		ref: "whatever"
	},
	{
		do: "fetch-container"
		ref: "busybox"
	},
]

This will rightfully fail.

Now, fix it (case 2):

package testing

#dagger: compute: [
	{
		do: "fetch-container"
		ref: "alpine"
	},
	{
		do: "fetch-container"
		ref: "busybox"
	},
]

Run again.
Now it works rightfully.

Now change it again (case 3), back to the original version with the bogus image:

package testing

#dagger: compute: [
	{
		do: "fetch-container"
		ref: "whatever"
	},
	{
		do: "fetch-container"
		ref: "busybox"
	},
]

Run again.
Now, it succeeds.

It "kinda" makes sense - clearly the first fetch should be "optimized out", since it is "useless".

But in its current state, it's quite weird and very confusing to debug.

This is true with exec too (it is not limited to fetch-container), so I assume buildkit is involved, keeping an optimized / cached version of case 2 that "optimizes out" the first fetch and does not refresh it on subsequent changes.

Maybe the right behavior would be for "case 3" to fail as well (and to invalidate the cache, since the image ref changed) - but at the very least, cases 1 & 3 should yield the same results.
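One way to frame the expected behavior: the cache key for an op should cover every input field, including ref, so that editing the image reference invalidates the cached result. A toy sketch (not dagger's or buildkit's actual keying):

```python
import hashlib
import json

def op_key(op):
    """Cache key covering every field of the op; changing 'ref' (or any
    other input) yields a different key, invalidating the cached result."""
    return hashlib.sha256(json.dumps(op, sort_keys=True).encode()).hexdigest()

case1 = {"do": "fetch-container", "ref": "whatever"}
case2 = {"do": "fetch-container", "ref": "alpine"}
```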

Exec never outputs anything

With the following:

package acme

import (
	"dagger.cloud/dagger"
	"dagger.cloud/alpine"
)

let base=alpine & {
	package: {
		bash: ">3.0"
		rsync: true
	}
}


www: {

	source: {
		#dagger: compute: [
            dagger.#Load & { from: base },
			dagger.#Exec & {
			    args: ["sh", "-c", "echo", "shouldseeme"]
			    // XXX workaround exec not working without a mount
			    mount: foo: {}
			}
		]
	}

	host: string
}


And running

DEBUG=1 DOCKER_OUTPUT=1 dagger compute .

The string "shouldseeme" should be visible somewhere in the logs, right?

Support multi-arch Components

With the growing number of arm machines, it's critical to support multi-arch builds.

From the simple example in the repo:

#alpine: dagger.#Component & {
	version: "3.13.1@sha256:3c7497bf0c7af93428242d6176e8f7905f2201d8fc5861f45be7a346b5f23436"
	package: [string]: true | false | string
	#dagger: compute: [
		{
			do: "fetch-container"
			ref: "index.docker.io/alpine:\(version)"
		},
		for pkg, info in package {
			if (info & true) != _|_ {
				do: "exec"
				args: ["apk", "add", "-U", "--no-cache", pkg]
			}
			if (info & string) != _|_  {
				do: "exec"
				args: ["apk", "add", "-U", "--no-cache", "\(pkg)\(info)"]
			}
		},
	]
}

Here the base image implicitly points to x86 because of the digest.

Then, a dagger component is likely to be arch specific, for instance:

#awsCli: {
	version: *"2.1.23" | string
	#dagger: compute: [
		dagger.#Load & {
			from: #alpine & {
				package: {
					jq: true
					bash: true
					curl: true
					"libc6-compat": true
				}
			}
		},
		dagger.#Exec & {
			args: ["bash", "-c", """
				cd /tmp; curl -sfL "https://awscli.amazonaws.com/awscli-exe-linux-x86_64-\(version).zip" -o aws.zip &&
				unzip aws.zip &&
				./aws/install
				rm -rf aws*
				"""
			]
		}
	]
}

This will only work on x86. We need a way to auto-select Components and base images depending on the host arch so we can write multi-arch components.

buildkit's Dockerfile is a good example: https://github.com/moby/buildkit/blob/master/Dockerfile
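As a first step, components could resolve arch-specific strings from the host architecture. A sketch: the awscli URL pattern is taken from the snippet above, while the mapping table itself is an illustrative assumption:

```python
# Illustrative mapping from host machine names (e.g. platform.machine())
# to the arch strings used in download URLs and OCI platform fields.
ARCHES = {"x86_64": "x86_64", "amd64": "x86_64", "aarch64": "aarch64", "arm64": "aarch64"}

def awscli_url(version, machine):
    """Build the arch-appropriate awscli download URL for the given host."""
    arch = ARCHES[machine]
    return f"https://awscli.amazonaws.com/awscli-exe-linux-{arch}-{version}.zip"
```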

Cue is required for Dagger make (macOS/Linux)

Issue: Dagger will not complete make. cue appears to be a requirement to complete the tasks in gen.sh. This error is received:

gen.sh: line 4: cue: command not found
dagger/compiler.go:1: running "sh": exit status 127

Similar issue with an Ubuntu (Linux) host:

gen.sh: 4: gen.sh: cue: not found


Proposed solution (macOS): brew install cuelang/tap/cue, or perhaps add go get -u cuelang.org/go/cmd/cue to the manifest.

CPU spikes for compute

The way things are computed is quite expensive, for some reason. No idea what the root cause is.

As a reference, a simple from alpine; echo hello world used to take about 700ms and 30% CPU on the old runtime, whereas it's roughly 3-4 seconds and 176% CPU on dagger.

# BLR
bl-runtime apply -s main  0.10s user 0.04s system 23% cpu 0.634 total

# Dagger
DOCKER_OUTPUT=1 DEBUG=1 dagger compute .  6.57s user 0.36s system 176% cpu 3.918 total

The CPU spike seems to indicate there's heavy Cue computation involved, which might cause the slowdown. Possible reason for this is how Dagger matches specifications (e.g. #Exec), by using Fill() + Validate().

Connect input directories

dagger compute needs a flag to connect input directories.

Ideally the same flag could be used for 1) local directories, 2) remote git repositories, and 3) advanced filesystem operations like subdirectories, synthetic files, etc. But, if a single flag for all inputs is too complex, perhaps there can be 2 different flags: one for the most simple and common sources (e.g. local directory), and one for more advanced configurations (everything else).

Question: is there a way to ask dagger to process only one specific "target" in a cue file?

Consider:

foo: dagger: #compute: [
  // ... do something
]

bar: dagger: #compute: [
  // ... do something else
]

baz: dagger: #compute: [
  // ... do one last thing
]

Sometimes, foo takes time (for whatever reason), and you also want foo to be always: true (like an operation that would download / refresh a local cache, to then be used indirectly by bar).

When debugging, it is desirable to have the ability to just run "bar" for example.

Is there / could there be a mechanism to ask dagger to execute only "bar" (without having to separate the tasks entirely in different directories)?

Maybe something like:

dagger compute myproject --target=bar
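A sketch of what such a flag could mean internally; the --target name and semantics are the proposal's, and the code below is purely hypothetical:

```python
def compute(config, target=None):
    """Sketch of a hypothetical --target flag: restrict processing to one
    top-level key of the configuration instead of computing everything."""
    selected = config if target is None else {target: config[target]}
    return {name: f"computed {name}" for name in selected}

config = {"foo": ["..."], "bar": ["..."], "baz": ["..."]}
```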

Error messages with imported files only mention the import point

I'm importing a package.

There is a problem inside that package file.

The error message:


1:13PM FTL failed to compute error="buildkit solve:  load base config: import failed: illegal character U+005C '\\':\n    /config/dagger.cue:4:3\n\n"
  • dagger.cue is the top-level file, not the file where the problem is
  • line 4, character 3 is where the problematic file has been imported

This is not a deal-killer bug, though it does make debugging pkgs significantly harder.

Duplicate execution with `always: true`

Consider the following.

I have one "component", that is expected to produce a random string.

Then, I have two additional components ("test1" and "test2") that are copying and exporting this string.

package testing

component: #dagger: compute: [{
	do: "fetch-container"
	ref: "alpine"
}, {
	do: "exec"
	args: ["sh", "-c", """
	tr -dc A-Za-z0-9 </dev/urandom | head -c 13 > /id
	"""]
	dir: "/"
	always: true
}]

test1: {
	string

	#dagger: {
		compute: [
			{
				do: "fetch-container"
				ref: "busybox"
			},
			{
				do: "copy"
				from: component
				src: "/id"
				dest: "/"
			},
			{
				do: "export"
				source: "/id"
				format: "string"
			},
		]
	}
}

test2: {
	string

	#dagger: {
		compute: [
			{
				do: "fetch-container"
				ref: "busybox"
			},
			{
				do: "copy"
				from: component
				src: "/id"
				dest: "/"
			},
			{
				do: "export"
				source: "/id"
				format: "string"
			},
		]
	}
}

I was (naively?) expecting that "component" would run only once (and that "always: true" would ensure that it re-runs on subsequent executions), but in fact it's actually running twice (i.e. it's acting as if it had been inlined into test1 and test2).

Is this what you guys wanted/expected?

I guess it's somewhat confusing coming from a "usual" language.

Concretely, in a programming language I was expecting:


component=$(tr -dc A-Za-z0-9 </dev/urandom | head -c 13)

test1=$(echo $component)

test2=$(echo $component)

And that's really different from:

test1=$(tr -dc A-Za-z0-9 </dev/urandom | head -c 13)

test2=$(tr -dc A-Za-z0-9 </dev/urandom | head -c 13)
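The "usual language" expectation above is memoization: evaluate the component once, and let every reference reuse the result. A sketch of that semantics:

```python
import functools
import random

@functools.lru_cache(maxsize=None)
def component():
    """Memoized: the random string is produced once, and every consumer
    that references the component observes the same value."""
    return "".join(random.choice("abcdef0123456789") for _ in range(13))

test1 = component()
test2 = component()
```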

Export not working for embedded "number"

Because number's definition is int | float, it does not work (dagger cannot find the script to execute).

I debugged a little and this is what dagger "sees":

{
    int
    #dagger: {
        ...
    }
} | {
    float
    #dagger: {
        ...
    }
}

Lookup("#dagger").Exists() returns false

--input is racy (also: it does not seem to work as expected)

Here is a cue file:

lol: string

something: {

  dagger: {
    string
    #compute: [
      {
        do: "fetch-container"
        ref: "alpine"
      },
      {
        do: "exec"
        args: ["sh", "-c", """
        echo "SHOW ME THE LOL: \(lol)" > /out
        """]
      },
      {
        do: "export"
        source: "/out"
      }
    ]
  }
}

Running: dagger compute --input 'lol: "LALALALALA"'

This may randomly yield:

1:

3:12PM ERR component failed error="validate op 2/3: #Op: empty disjunction: 7 related errors: (and 7 more errors)" path=res1.oven
3:12PM DBG [Env.Compute] processing path=res1.secrets
3:12PM DBG [Env.Compute] processing path=res2.secrets
3:12PM DBG cueflow task path=res2.oven state=Terminated
3:12PM DBG cueflow task: filling result path=res2.oven state=Terminated
3:12PM FTL failed to compute error="buildkit solve:  res2.oven.#dagger.compute.1.args.13: structural cycle\n"

Or 2:

3:15PM FTL failed to compute error="buildkit solve:  #Op: task failed: validate op 2/3: #Op: empty disjunction: 7 related errors: (and 7 more errors)\n"

Or it may "work" (but not really).
Note that in the case where it "works", it still silently fails to execute the script (possibly because of #28?).

Exec exit code is not bubbled up?

Consider:

package testing

#dagger: compute: [
	{
		do:  "fetch-container"
		ref: "alpine"
	},
	{
		do: "exec"
		args: ["sh", "-c", "exit 123"]
		dir: "/"
	},
]

dagger will exit with 1 (and not 123).

Unless there is a reason I missed, IMHO dagger should bubble up the exit code and exit with whatever the script returned.

This would also incidentally make integration testing much easier, in a context where execs may or may not run depending on concreteness.
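In plain shell, the expected propagation is easy to sketch (a hypothetical wrapper pattern, not dagger's current behavior):

```shell
# Capture the child's exit code without aborting the wrapper itself.
sh -c 'exit 123' && status=0 || status=$?
echo "child exit code: $status"
# A faithful runner would now terminate with `exit "$status"`,
# instead of collapsing every failure to a generic exit 1.
```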

fetch-container does not seem to do anything

www: {

	source: {
		#dagger: compute: [
			{
				do: "fetch-container"
				ref: "lalalalala",
			},
		]
	}
}

I would expect an error on this ^ (of course there is no such image as "lalalalala").
Not sure if I'm doing something wrong...

External Code Support

Currently the only way to add code to a package is to embed it as a CUE string, which comes with many limitations:

  • Unpleasant to read and write (tedious indentation, no syntax highlighting for the embedded code)
  • Tooling doesn't work (e.g. shellcheck, dependabot, ...)
  • Prevents splitting code into multiple files
  • Prevents a tight development loop (e.g. running the script directly, outside of dagger)

Depends on Embed files in a cue configuration #277

Component validation errors are unintelligible

Let's say that instead of do: "fetch-container" we mistakenly type op: "fetch-container", like so:

package main

A: #dagger: compute: [
	{ op: "fetch-container",  ref: "alpine" },
]

The generator error looks like this:

FTL failed to compute error="buildkit solve: invalid task: invalid component: #Component: 7 errors in empty disjunction:\n#Component: conflicting values bool and {#dagger:{compute:[{op:"fetch-container",ref:"alpine"}]}} (mismatched types bool and struct):\n /config/main.cue:3:4\n spec.cue:42:2\n#Component: conflicting values bytes and {#dagger:{compute:[{op:"fetch-container",ref:"alpine"}]}} (mismatched types bytes and struct):\n /config/main.cue:3:4\n spec.cue:42:32\n#Component: conflicting values float and {#dagger:{compute:[{op:"fetch-container",ref:"alpine"}]}} (mismatched types float and struct):\n /config/main.cue:3:4\n spec.cue:42:15\n#Component: conflicting values int and {#dagger:{compute:[{op:"fetch-container",ref:"alpine"}]}} (mismatched types int and struct):\n /config/main.cue:3:4\n spec.cue:42:9\n#Component: conflicting values string and {#dagger:{compute:[{op:"fetch-container",ref:"alpine"}]}} (mismatched types string and struct):\n /config/main.cue:3:4\n spec.cue:42:23\n#Component.#dagger.compute.0: 1 errors in empty disjunction:\n#Component.#dagger.compute.0: field op not allowed:\n /config/main.cue:3:1\n /config/main.cue:4:4\n spec.cue:36:1\n spec.cue:38:11\n spec.cue:49:12\n spec.cue:56:14\n spec.cue:59:6\n spec.cue:107:18\n\n"

Multiple `#dagger: compute` calls do not seem to work?

Is it allowed to have multiple #dagger: compute snippets in a single cue file?

This:

package acme

import (
	"dagger.cloud/alpine"
	"dagger.cloud/dagger"
)

let base=alpine & {
	package: {
		bash: ">3.0"
		rsync: true
	}
}

www: {

	dosomething: {
		#dagger: compute: [
			dagger.#Load & { from: base },
		]
	}
}

bar: {
	dosomethingelse: {
		#dagger: compute: [
			dagger.#Load & { from: base },
		]
	}
}

Will output:

[info] Running
buildkit solve:  task failed: #Script.0: field `do` not allowed:
    /config/cue.mod/pkg/dagger.cloud/dagger/dagger.cue:74:2
    /config/simple.cue:29:4
    /config/simple.cue:29:19
    input.cue:1:1

Question about the UX for shell evaluation and strings

Basically, if you want to write anything (shell script) a bit fancy with variables, you have to do:

args: ["sh", "-c", """
   echo $foo
"""]

(both calling a shell, and using multi-lines)

None of the following will work as "intuitively" expected:

args: ["sh", "-c", "echo $foo"]
args: ["echo $foo"]
args: ["echo", "$foo"]
args: "echo $foo"
args: ["""
echo $foo
"""]

I (think I) understand the technical reasons why it is this way (and why it's important to accommodate both shell and non-shell use cases), but it may prove challenging for newcomers, and from a user-experience standpoint it is rather boilerplate-y.

Also, unfortunately I do not have an alternative to propose right now, but thought I would open the question here.
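A quick illustration of why only the `sh -c` form behaves as expected: variable expansion is done by a shell, never by plain argv passing (plain shell sketch, independent of dagger):

```shell
foo="hello"
# Without a shell, $foo travels as a literal argument string:
printf 'no shell:   %s\n' '$foo'
# With sh -c, a shell parses the command string and expands the variable:
out=$(foo="$foo" sh -c 'echo "$foo"')
printf 'with sh -c: %s\n' "$out"
```

So `args: ["echo", "$foo"]` hands the literal bytes `$foo` to echo, which is why only the `["sh", "-c", …]` form evaluates anything.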

Validation issue with optional fields

There is a "validation issue" (https://discord.com/channels/707636530424053791/796905486145683506/798725370219986966) that prevents any default value from being applied properly.

Below is an example with exec, but the same holds for any other operation.

package test

#dagger: compute: [
    {
        do: "fetch-container"
        ref: "busybox"
    },
    {
        do: "exec"
        args: ["sh", "-c", """
            echo \(host) > /tmp/out
        """]
//                dir: "/"
    },
]

The output will be:

buildkit solve:  task failed: #Script.1: empty disjunction: 7 related errors:
#Script.1: field `dir` not allowed:

The expectation is that the optional "dir" field can be omitted and would receive the spec.cue default ("/").

“value is not executable” when loading a component from another package

dagger compute fails with “value is not executable” when a component A loads a component B with the load action, and component B is defined in another package.

For example this will fail:

import (
    "dagger.cloud/alpine"
)

// Component defined in another package
B: alpine & {
  package: bash: true
}

A: {
  #dagger: compute: [
    {
        do: "load"
        from: B // this will fail
    }
  ]
}

To reproduce with an old version of examples/simple which has this problem:

NOTE: you need an old version of the example config to reproduce, but you can use the current version of dagger.

$ BROKEN_EXAMPLE_COMMIT=6f4577d501d6be13852447ad9514535ca9a7fb8f
$ git checkout $BROKEN_EXAMPLE_COMMIT -- examples/simple/
$ dagger compute examples/simple/
10:40AM DBG Op.Walk v={}
10:40AM DBG finalized client config cfg={"Boot":"[{\"dir\":\"examples/simple/\",\"do\":\"local\",\"include\":[\"*.cue\",\"cue.mod\"]}]","BootDir":"exa
Input":""} localdirs={"examples/simple/":"examples/simple/"}
10:40AM INF running
10:40AM DBG spawning buildkit job attrs={"boot":"[{\"dir\":\"examples/simple/\",\"do\":\"local\",\"include\":[\"*.cue\",\"cue.mod\"]}]","input":""} ho
imple/":"examples/simple/"}
10:40AM DBG New Env boot="[{\"dir\":\"examples/simple/\",\"do\":\"local\",\"include\":[\"*.cue\",\"cue.mod\"]}]" input=
10:40AM DBG building cue configuration from boot state
10:40AM DBG Compiler.Build: processing path=/cue.mod
10:40AM DBG Compiler.Build: processing path=/cue.mod/module.cue
10:40AM DBG Compiler.Build: processing path=/cue.mod/pkg
10:40AM DBG Compiler.Build: processing path=/cue.mod/pkg/dagger.cloud
10:40AM DBG Compiler.Build: processing path=/cue.mod/pkg/dagger.cloud/alpine
10:40AM DBG Compiler.Build: processing path=/cue.mod/pkg/dagger.cloud/alpine/alpine.cue
10:40AM DBG Compiler.Build: processing path=/cue.mod/pkg/dagger.cloud/dagger
10:40AM DBG Compiler.Build: processing path=/cue.mod/pkg/dagger.cloud/dagger/dagger.cue
10:40AM DBG Compiler.Build: processing path=/simple.cue
10:40AM DBG loading input overlay
10:40AM DBG ENV base="{\"www\":{\"source\":{}}}" input={}
10:40AM DBG computing env
10:40AM DBG walking value="{\"www\":{\"source\":{}}}"
10:40AM DBG Env.Walk: processing path=
10:40AM DBG Env.Walk: processing path=www
10:40AM DBG Env.Walk: processing path=www.source
10:40AM DBG Env.Walk: processing path=www.listing
10:40AM DBG Env.Walk: processing path=www.host
10:40AM DBG Env.Walk: processing path=www.url
10:40AM DBG [Env.Compute] processing path=www.url
10:40AM DBG [Env.Compute] processing path=www.source
10:40AM DBG cueflow task path=www.source state=Terminated
10:40AM DBG cueflow task: filling result path=www.source state=Terminated
10:40AM DBG [Env.Compute] processing path=www.listing
10:40AM ERR component failed error="execute op 1/3: load: value is not executable" path=www.listing
10:40AM FTL failed to compute error="buildkit solve:  task failed: execute op 1/3: load: value is not executable\n"
$

Exec: dir: does not seem to work

package testing

#dagger: compute: [
	{
		do: "fetch-container"
		ref: "alpine"
	},
	{
		do: "exec"
		args: ["sh", "-c", """
			echo "pwd is: $(pwd)"
			[ "$(pwd)" == "/etc" ] || exit 1
		"""]
		dir: "/etc"
		always: true
	},
]
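For reference, here is a plain-shell sketch of the semantics `dir` is expected to have (running the command with its working directory changed first):

```shell
# `dir: "/etc"` in an exec op should be equivalent to cd-ing into
# the directory before running the command:
result=$( cd /etc && pwd )
echo "pwd is: $result"
[ "$result" = "/etc" ] && echo "dir honored"
```

Inside the dagger exec above, `pwd` instead reports the container default, so the `dir` field appears to be ignored.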

Export does not seem to work

With

// ACME platform: everything you need to develop and ship improvements to
// the ACME clothing store.
package acme

import (
	"dagger.cloud/dagger"
	"dagger.cloud/alpine"
)

let base=alpine & {
	package: {
		bash: ">3.0"
		rsync: true
	}
}


www: {

	source: {
		#dagger: compute: [
			dagger.#Load & { from: base },
			dagger.#Exec & {
				args: ["sh", "-c", """
					ls -l > /tmp/out
					"""]
				// XXX workaround exec not working without a mount
				mount: foo: {}
			},
			dagger.#Export & {
				source: "/tmp/out"
			},
		]
	}

	host: string
}

Will return:

[info] Running
buildkit solve:  task failed: execute op 3/3: export /tmp/out:  open /tmp/buildkit-mount962564487/tmp/out: no such file or directory

Panic

Just stumbled upon this.

Below is the simplest repro I could come up with (ignore the fact that it is nonsense; the point is that it should not panic).

Using: cc5a48d

In a subpackage, go with:

#E: dagger.#Exec & {
	script: string | *""
	arguments: [...string]
	if script != "" {
		args: arguments
	}
}

In your main script:

foo: #dagger: compute: [
  {
    do: "fetch-container"
    ref: "dubodubonduponey/debian"
  },
  dagger.#E & {
    script: "echo lol"
  }
]

Result:

#Exec2: field `script` not allowed:
    ./cue.mod/pkg/duponey.cloud/dagger/main.cue:59:5
    ./cue.mod/pkg/duponey.cloud/dagger/main.cue:45:8
    ./cue.mod/pkg/duponey.cloud/dagger/main.cue:55:9
    ./cue.mod/pkg/duponey.cloud/dagger/main.cue:56:3
10:02AM INF running
10:02AM DBG assembled boot script bootSource="[{\n\tdo:  \"local\"\n\tdir: \"/Users/dmp/Projects/Distribution/docker-images/docker-debian/hack\"\n\tinclude: [\"*.cue\", \"cue.mod\"]\n}]"
10:02AM DBG Op.Walk v={}
10:02AM DBG New Env boot="[{\n\tdo:  \"local\"\n\tdir: \"/Users/dmp/Projects/Distribution/docker-images/docker-debian/hack\"\n\tinclude: [\"*.cue\", \"cue.mod\"]\n}]" input=
10:02AM DBG building cue configuration from boot state
10:02AM DBG Compiler.Build: processing path=/cue.mod
10:02AM ??? #1 local:///Users/dmp/Projects/Distribution/docker-images/docker-debian/hack
10:02AM ??? #1 sha256:daa79f569a3538b5d573cb444dfae05c4672db96ff70070abdfdd9ca38a0a98e
10:02AM ??? #1 transferring /Users/dmp/Projects/Distribution/docker-images/docker-debian/hack: 915B done
10:02AM ??? #1 DONE 0.0s
10:02AM DBG Compiler.Build: processing path=/cue.mod/module.cue
10:02AM DBG Compiler.Build: processing path=/cue.mod/pkg
10:02AM DBG Compiler.Build: processing path=/cue.mod/pkg/duponey.cloud
10:02AM DBG Compiler.Build: processing path=/cue.mod/pkg/duponey.cloud/cake
10:02AM DBG Compiler.Build: processing path=/cue.mod/pkg/duponey.cloud/cake/cake
10:02AM DBG Compiler.Build: processing path=/cue.mod/pkg/duponey.cloud/cake/cake/main.cue
10:02AM ???
10:02AM ??? #2 local:///Users/dmp/Projects/Distribution/docker-images/docker-debian/hack
10:02AM ??? #2 sha256:5fe9133f93a69ffbd2aa0897773fc33f2dd3686187d66ef01710b67a8eebc76a
10:02AM ??? #2 DONE 0.0s
10:02AM ???
10:02AM ??? #3 local:///Users/dmp/Projects/Distribution/docker-images/docker-debian/hack
10:02AM ??? #3 sha256:02352f08a2204716369281aa893de14d4557bb5bee954aa10805d6c81118b2c9
10:02AM ??? #3 DONE 0.0s
10:02AM ???
10:02AM ??? #4 local:///Users/dmp/Projects/Distribution/docker-images/docker-debian/hack
10:02AM ??? #4 sha256:719717cbf90d9efe0e555c33ddef6a1c4c5ffe002dfe978e84fb58e97532a7ae
10:02AM ??? #4 DONE 0.0s
10:02AM ???
10:02AM ??? #5 local:///Users/dmp/Projects/Distribution/docker-images/docker-debian/hack
10:02AM ??? #5 sha256:2ed0ff54d057fa8577b569fd871cdd9b50d01a45a393d9b8a1e7ecf64cf4a113
10:02AM ??? #5 DONE 0.0s
10:02AM ???
10:02AM ??? #6 local:///Users/dmp/Projects/Distribution/docker-images/docker-debian/hack
10:02AM ??? #6 sha256:615968a50b24d0a644dbb2efe42094cbd67a4b31c21981ef098ac18b9bd310a5
10:02AM ??? #6 DONE 0.0s
10:02AM ???
10:02AM ??? #7 local:///Users/dmp/Projects/Distribution/docker-images/docker-debian/hack
10:02AM ??? #7 sha256:a20691a6602a919ff8db47ee8ddd0fc36040b9df514c624a31d44b93c79ecbbc
10:02AM ??? #7 DONE 0.0s
10:02AM DBG Compiler.Build: processing path=/cue.mod/pkg/duponey.cloud/cake/dubo
10:02AM DBG Compiler.Build: processing path=/cue.mod/pkg/duponey.cloud/cake/dubo/main.cue
10:02AM DBG Compiler.Build: processing path=/cue.mod/pkg/duponey.cloud/cake/oven
10:02AM DBG Compiler.Build: processing path=/cue.mod/pkg/duponey.cloud/cake/oven/main.cue
10:02AM DBG Compiler.Build: processing path=/cue.mod/pkg/duponey.cloud/dagger
10:02AM DBG Compiler.Build: processing path=/cue.mod/pkg/duponey.cloud/dagger/main.cue
10:02AM ???
10:02AM ??? #8 local:///Users/dmp/Projects/Distribution/docker-images/docker-debian/hack
10:02AM ??? #8 sha256:4ffb0f62b6a429e9091249c4575077180950b9c21429edf0d59c685a6d6cae93
10:02AM ??? #8 DONE 0.0s
10:02AM ???
10:02AM ??? #9 local:///Users/dmp/Projects/Distribution/docker-images/docker-debian/hack
10:02AM ??? #9 sha256:bbff60981158361cb9a346d3a794a9fda3e3ebde50ff9220e0561b67f9fd0632
10:02AM ??? #9 DONE 0.0s
10:02AM ???
10:02AM ??? #10 local:///Users/dmp/Projects/Distribution/docker-images/docker-debian/hack
10:02AM ??? #10 sha256:d770d32ff49aec0d3ee31be5e15c67613932fe0e7f3e776674d09199639e303b
10:02AM ??? #10 DONE 0.0s
10:02AM ???
10:02AM ??? #11 local:///Users/dmp/Projects/Distribution/docker-images/docker-debian/hack
10:02AM ??? #11 sha256:46ba7c92b35f20e9a34b9fb446b880d75d9828c60b7e837471149e9ffb98a76b
10:02AM ??? #11 DONE 0.0s
10:02AM ???
10:02AM ??? #12 local:///Users/dmp/Projects/Distribution/docker-images/docker-debian/hack
10:02AM ??? #12 sha256:cd896b1611b16a870e51b7fbc39c8a41a6b87d8e4fc4a58c8944772a27d21fac
10:02AM ??? #12 DONE 0.0s
10:02AM ???
10:02AM ??? #13 local:///Users/dmp/Projects/Distribution/docker-images/docker-debian/hack
10:02AM ??? #13 sha256:d8cd51ab06f0a3ba945b12f8820aaa3ee58374d8e590799b57335062c51e24bf
10:02AM ??? #13 DONE 0.0s
10:02AM DBG Compiler.Build: processing path=/cue.mod/pkg/duponey.cloud/debian
10:02AM DBG Compiler.Build: processing path=/cue.mod/pkg/duponey.cloud/debian/main.cue
10:02AM DBG Compiler.Build: processing path=/cue.mod/pkg/duponey.cloud/tools
10:02AM DBG Compiler.Build: processing path=/cue.mod/pkg/duponey.cloud/tools/main.cue
10:02AM DBG Compiler.Build: processing path=/dagger.cue
10:02AM DBG loading input overlay
10:02AM DBG ENV base="{\"foo\":{}}" input={}
10:02AM DBG computing env
10:02AM DBG walking value="{\"foo\":{}}"
10:02AM DBG Env.Walk: processing path=
10:02AM DBG Env.Walk: processing path=deb1
10:02AM DBG Env.Walk: processing path=deb2
10:02AM DBG Env.Walk: processing path=foo
10:02AM DBG [Env.Compute] processing path=foo
panic: assertion failed: invalid state for disjunct

goroutine 131 [running]:
cuelang.org/go/internal/core/adt.Assertf(...)
	/Users/dmp/Applications/bin/gvm/pkgsets/go1.15/global/pkg/mod/cuelang.org/[email protected]/internal/core/adt/context.go:54
cuelang.org/go/internal/core/adt.clone(0xc0006f3290, 0x9, 0xc0002ee380, 0x1, 0x0, 0x0, 0x1e74b40, 0x24cbd20, 0x0, 0xc0001df110, ...)
	/Users/dmp/Applications/bin/gvm/pkgsets/go1.15/global/pkg/mod/cuelang.org/[email protected]/internal/core/adt/disjunct.go:354 +0x645
cuelang.org/go/internal/core/adt.(*nodeContext).expandDisjuncts(0xc0002ee380, 0xc0006f3305, 0xc0002ee380, 0x0, 0x0)
	/Users/dmp/Applications/bin/gvm/pkgsets/go1.15/global/pkg/mod/cuelang.org/[email protected]/internal/core/adt/disjunct.go:134 +0x161e
cuelang.org/go/internal/core/adt.(*Unifier).Unify(0xc0006bbe28, 0xc0006bbe10, 0xc0006f33b0, 0x1c5fc05)
	/Users/dmp/Applications/bin/gvm/pkgsets/go1.15/global/pkg/mod/cuelang.org/[email protected]/internal/core/adt/eval.go:305 +0x4c5
cuelang.org/go/internal/core/adt.(*nodeContext).completeArcs(0xc000731880, 0x5)
	/Users/dmp/Applications/bin/gvm/pkgsets/go1.15/global/pkg/mod/cuelang.org/[email protected]/internal/core/adt/eval.go:567 +0x136
cuelang.org/go/internal/core/adt.(*nodeContext).postDisjunct(0xc000731880, 0x5)
	/Users/dmp/Applications/bin/gvm/pkgsets/go1.15/global/pkg/mod/cuelang.org/[email protected]/internal/core/adt/eval.go:538 +0x4fb
cuelang.org/go/internal/core/adt.(*nodeContext).expandDisjuncts(0xc000731880, 0xc0006f3205, 0xc000731880, 0x0, 0x0)
	/Users/dmp/Applications/bin/gvm/pkgsets/go1.15/global/pkg/mod/cuelang.org/[email protected]/internal/core/adt/disjunct.go:142 +0x10e8
cuelang.org/go/internal/core/adt.(*Unifier).Unify(0xc0006bbe28, 0xc0006bbe10, 0xc0006f3290, 0xc0006bbe05)
	/Users/dmp/Applications/bin/gvm/pkgsets/go1.15/global/pkg/mod/cuelang.org/[email protected]/internal/core/adt/eval.go:305 +0x4c5
cuelang.org/go/internal/core/adt.(*Vertex).Finalize(...)
	/Users/dmp/Applications/bin/gvm/pkgsets/go1.15/global/pkg/mod/cuelang.org/[email protected]/internal/core/adt/composite.go:405
cuelang.org/go/cue.Value.Unify(0xc00007e4f0, 0xc00056a2d0, 0xc00007e4f0, 0xc0004661b0, 0x1e9cbc0, 0xc0004661b0)
	/Users/dmp/Applications/bin/gvm/pkgsets/go1.15/global/pkg/mod/cuelang.org/[email protected]/cue/types.go:1778 +0xd7
cuelang.org/go/cue.Value.Fill(0xc00007e4f0, 0xc00056a2d0, 0x1d32e60, 0xc000820940, 0x0, 0x0, 0x0, 0xc0005cb750, 0x1af0b05)
	/Users/dmp/Applications/bin/gvm/pkgsets/go1.15/global/pkg/mod/cuelang.org/[email protected]/cue/types.go:1648 +0x205
dagger.cloud/go/dagger.(*Value).Fill(0xc00036b2a0, 0x1d1cfe0, 0xc00036b260, 0x0, 0x0)
	/Users/dmp/Projects/Go/src/github.com/blocklayerhq/dagger/dagger/value.go:45 +0xe5
dagger.cloud/go/dagger.Spec.Validate(0xc00059b0a0, 0xc00036b260, 0x1d3bdff, 0x7, 0x0, 0x0)
	/Users/dmp/Projects/Go/src/github.com/blocklayerhq/dagger/dagger/spec.go:18 +0x69
dagger.cloud/go/dagger.(*Value).Validate(0xc00036b260, 0xc000027848, 0x1, 0x1, 0xc00036b0a0, 0xc00036b240)
	/Users/dmp/Projects/Go/src/github.com/blocklayerhq/dagger/dagger/value.go:222 +0xd7
dagger.cloud/go/dagger.(*Script).Validate(...)
	/Users/dmp/Projects/Go/src/github.com/blocklayerhq/dagger/dagger/script.go:20
dagger.cloud/go/dagger.(*Value).Script(0xc00036b260, 0x1d434f9, 0xf, 0xc00036b260)
	/Users/dmp/Projects/Go/src/github.com/blocklayerhq/dagger/dagger/value.go:262 +0x92
dagger.cloud/go/dagger.(*Component).ComputeScript(0xc0005990c8, 0x0, 0x0, 0x0)
	/Users/dmp/Projects/Go/src/github.com/blocklayerhq/dagger/dagger/component.go:45 +0x76
dagger.cloud/go/dagger.(*Component).Execute(0xc0005990c8, 0x1e93e00, 0xc00025ff50, 0x0, 0x0, 0xc000354e60, 0x1b9f7c0, 0xc000820860, 0xc00071c000, 0x0, ...)
	/Users/dmp/Projects/Go/src/github.com/blocklayerhq/dagger/dagger/component.go:71 +0x65
dagger.cloud/go/dagger.(*Component).Compute(0xc0005990c8, 0x1e93e00, 0xc00025ff50, 0x1e97180, 0xc00004a200, 0x1e74e40, 0xc000438700, 0x0, 0x0, 0x0, ...)
	/Users/dmp/Projects/Go/src/github.com/blocklayerhq/dagger/dagger/component.go:56 +0x1e5
dagger.cloud/go/dagger.(*Env).Compute.func1(0x1e93e00, 0xc00025ff50, 0xc0005990c8, 0x1e74e40, 0xc000438700, 0x0, 0x0)
	/Users/dmp/Projects/Go/src/github.com/blocklayerhq/dagger/dagger/env.go:90 +0x116
dagger.cloud/go/dagger.(*Env).Walk.func2.1(0xc000438700, 0x0, 0x0)
	/Users/dmp/Projects/Go/src/github.com/blocklayerhq/dagger/dagger/env.go:198 +0x56
cuelang.org/go/tools/flow.RunnerFunc.Run(0xc0001de000, 0xc000438700, 0x0, 0x0, 0x0, 0x0)
	/Users/dmp/Applications/bin/gvm/pkgsets/go1.15/global/pkg/mod/cuelang.org/[email protected]/tools/flow/flow.go:124 +0x30
cuelang.org/go/tools/flow.(*Controller).runLoop.func1(0xc000438700)
	/Users/dmp/Applications/bin/gvm/pkgsets/go1.15/global/pkg/mod/cuelang.org/[email protected]/tools/flow/run.go:71 +0x45
created by cuelang.org/go/tools/flow.(*Controller).runLoop
	/Users/dmp/Applications/bin/gvm/pkgsets/go1.15/global/pkg/mod/cuelang.org/[email protected]/tools/flow/run.go:70 +0x299

Container Push Support

Currently, there is no way to export an image within a dagger config (e.g. "docker push").

This is extremely limiting -- for instance, it's impossible to deploy a container since it needs to be pushed to a registry before being able to run somewhere.

#134 added support for #DockerBuild, which is half of the solution.

The underlying limitation is it's not currently possible to export an image from within the buildkit frontend API.
