dragonflyoss / nydus
Nydus - the Dragonfly image service, providing fast, secure and easy access to container images.
Home Page: https://nydus.dev/
License: Apache License 2.0
Currently the release only provides x86_64 binaries; aarch64 binaries are also needed.
Related to kata-containers/tests#4445
Error types like `DaemonError` and `RafsError` can be simplified with the facilities provided by the `thiserror` crate. For instance, the `Display` and `From` implementations can be derived directly when using `thiserror`.
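As a sketch of what this buys (using a hypothetical, simplified `DaemonError`, not the real nydus type), the hand-written impls below are roughly what `thiserror`'s derive would generate from a couple of attributes:

```rust
use std::fmt;
use std::io;

// Hypothetical simplified DaemonError. With thiserror, all of the
// boilerplate below collapses into:
//
//     #[derive(thiserror::Error, Debug)]
//     enum DaemonError {
//         #[error("daemon object not found")]
//         NotFound,
//         #[error(transparent)]
//         Io(#[from] io::Error),
//     }
//
#[derive(Debug)]
enum DaemonError {
    NotFound,
    Io(io::Error),
}

// Hand-written Display: what #[error("...")] derives.
impl fmt::Display for DaemonError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            DaemonError::NotFound => write!(f, "daemon object not found"),
            DaemonError::Io(e) => e.fmt(f),
        }
    }
}

// Hand-written From: what #[from] derives, enabling `?` conversions.
impl From<io::Error> for DaemonError {
    fn from(e: io::Error) -> Self {
        DaemonError::Io(e)
    }
}

fn main() {
    let err: DaemonError = io::Error::from(io::ErrorKind::NotFound).into();
    println!("{}", err);
    println!("{}", DaemonError::NotFound); // prints: daemon object not found
}
```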
Per the LZ4 project page:
LZ4 is also compatible with dictionary compression, both at API and CLI levels. It can ingest any input file as dictionary, though only the final 64KB are used. This capability can be combined with the Zstandard Dictionary Builder, in order to drastically improve compression performance on small files.
Since we're using the `lz4-sys` crate to compress and decompress chunk data, and, more importantly, chunk data is at most 1MB, which LZ4 considers a small data set, nydus could likely benefit a lot from applying a compression dictionary.
LZ4 already offers some support at the API level; we need to look into how to enable it in the rafs code.
Convert the image with the following command:
/root/Nydus/bin/nydusify convert \
    --containerd-sock /run/docker/containerd/docker-containerd.sock \
    --source-auth bmV3bGFuZDpOZXdsYW5kQDEyMw== \
    --nydus-image /root/Nydus/bin/nydus-image \
    --source 172.32.150.15:80/nydus/nginx_tyt:1.0.0 \
    --target 172.32.150.15:80/paas_public/nginx_tyt:1.0.0-nydus \
    --source-insecure true
The --source-insecure flag is already set to true, but the following error is still reported:
INFO[2020-11-05T18:06:57+08:00] Pulling image 172.32.150.15:80/nydus/nginx_tyt:1.0.0 with platform linux/amd64
FATA[2020-11-05T18:06:57+08:00] pull source image: pull image: failed to resolve reference "172.32.150.15:80/nydus/nginx_tyt:1.0.0": failed to authorize: failed to fetch oauth token: Post "https://172.32.150.15/service/token": x509: certificate signed by unknown authority
Is this flag not taking effect?
To support container OS upgrades, each newly added blob stores the data difference from the previous container OS image.
However, the number of such blobs can eventually become so large that it hurts performance or even stops working properly, so we need to reduce the total number of blobs.
When running `ctr-remote image rpull --plain-http ghcr.io/dragonflyoss/image-service/alpine:nydus-latest`, it panicked:
DEBU[0000] fetching image="ghcr.io/dragonflyoss/image-service/alpine:nydus-latest"
DEBU[0000] resolving host=ghcr.io
DEBU[0000] do request host=ghcr.io request.header.accept="application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.list.v2+json, application/vnd.oci.image.manifest.v1+json, application/vnd.oci.image.index.v1+json, */*" request.header.user-agent=containerd/1.5.8+unknown request.method=HEAD url="https://ghcr.io/v2/dragonflyoss/image-service/alpine/manifests/nydus-latest"
DEBU[0000] fetch response received host=ghcr.io response.header.content-length=73 response.header.content-type=application/json response.header.date="Fri, 21 Jan 2022 03:35:37 GMT" response.header.www-authenticate="Bearer realm=\"https://ghcr.io/token\",service=\"ghcr.io\",scope=\"repository:dragonflyoss/image-service/alpine:pull\"" response.header.x-github-request-id="3B5E:0B9B:57103:185732:61EA2A08" response.status="401 Unauthorized" url="https://ghcr.io/v2/dragonflyoss/image-service/alpine/manifests/nydus-latest"
DEBU[0000] Unauthorized header="Bearer realm=\"https://ghcr.io/token\",service=\"ghcr.io\",scope=\"repository:dragonflyoss/image-service/alpine:pull\"" host=ghcr.io
DEBU[0000] do request host=ghcr.io request.header.accept="application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.list.v2+json, application/vnd.oci.image.manifest.v1+json, application/vnd.oci.image.index.v1+json, */*" request.header.user-agent=containerd/1.5.8+unknown request.method=HEAD url="https://ghcr.io/v2/dragonflyoss/image-service/alpine/manifests/nydus-latest"
DEBU[0001] fetch response received host=ghcr.io response.header.content-length=1052 response.header.content-type=application/vnd.oci.image.manifest.v1+json response.header.date="Fri, 21 Jan 2022 03:35:37 GMT" response.header.docker-content-digest="sha256:6188fc0d382e64fff54f936a9211e20ad6f6839b17171cc3d2faedae2463de9d" response.header.docker-distribution-api-version=registry/2.0 response.header.etag="\"sha256:6188fc0d382e64fff54f936a9211e20ad6f6839b17171cc3d2faedae2463de9d\"" response.header.x-github-request-id="3B5E:0B9B:57106:185737:61EA2A09" response.status="200 OK" url="https://ghcr.io/v2/dragonflyoss/image-service/alpine/manifests/nydus-latest"
DEBU[0001] resolved desc.digest="sha256:6188fc0d382e64fff54f936a9211e20ad6f6839b17171cc3d2faedae2463de9d" host=ghcr.io
fetching sha256:6188fc0d... application/vnd.oci.image.manifest.v1+json
DEBU[0001] fetch digest="sha256:6188fc0d382e64fff54f936a9211e20ad6f6839b17171cc3d2faedae2463de9d" mediatype=application/vnd.oci.image.manifest.v1+json size=1052
DEBU[0001] do request digest="sha256:6188fc0d382e64fff54f936a9211e20ad6f6839b17171cc3d2faedae2463de9d" mediatype=application/vnd.oci.image.manifest.v1+json request.header.accept="application/vnd.oci.image.manifest.v1+json, */*" request.header.user-agent=containerd/1.5.8+unknown request.method=GET size=1052 url="https://ghcr.io/v2/dragonflyoss/image-service/alpine/manifests/sha256:6188fc0d382e64fff54f936a9211e20ad6f6839b17171cc3d2faedae2463de9d"
fatal error: unexpected signal during runtime execution
[signal SIGSEGV: segmentation violation code=0x1 addr=0x63 pc=0x7f424de1c448]
runtime stack:
runtime.throw(0x1136a33, 0x2a)
/usr/local/go/src/runtime/panic.go:1116 +0x72
runtime.sigpanic()
/usr/local/go/src/runtime/signal_unix.go:726 +0x4ac
goroutine 16 [syscall]:
runtime.cgocall(0xe67570, 0xc0005345c0, 0xc000010108)
/usr/local/go/src/runtime/cgocall.go:133 +0x5b fp=0xc000534590 sp=0xc000534558 pc=0x4056bb
net._C2func_getaddrinfo(0xc000210400, 0x0, 0xc0003a3e00, 0xc000010108, 0x0, 0x0, 0x0)
_cgo_gotypes.go:94 +0x55 fp=0xc0005345c0 sp=0xc000534590 pc=0x6093d5
net.cgoLookupIPCNAME.func1(0xc000210400, 0x8, 0x8, 0xc0003a3e00, 0xc000010108, 0x0, 0xc0005346a0, 0x60c8f2)
/usr/local/go/src/net/cgo_unix.go:161 +0xc5 fp=0xc000534608 sp=0xc0005345c0 pc=0x60f1c5
net.cgoLookupIPCNAME(0x1109705, 0x3, 0xc0002103e0, 0x7, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/usr/local/go/src/net/cgo_unix.go:161 +0x16b fp=0xc000534718 sp=0xc000534608 pc=0x60a8eb
net.cgoIPLookup(0xc000278480, 0x1109705, 0x3, 0xc0002103e0, 0x7)
/usr/local/go/src/net/cgo_unix.go:218 +0x67 fp=0xc0005347b8 sp=0xc000534718 pc=0x60b027
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1374 +0x1 fp=0xc0005347c0 sp=0xc0005347b8 pc=0x46d581
created by net.cgoLookupIP
/usr/local/go/src/net/cgo_unix.go:228 +0xc7
goroutine 1 [semacquire]:
sync.runtime_Semacquire(0xc0004a2310)
/usr/local/go/src/runtime/sema.go:56 +0x45
sync.(*WaitGroup).Wait(0xc0004a2308)
/usr/local/go/src/sync/waitgroup.go:130 +0x65
golang.org/x/sync/errgroup.(*Group).Wait(0xc0004a2300, 0xc000188200, 0x12558a0)
/go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:40 +0x31
github.com/containerd/containerd/images.Dispatch(0x1255960, 0xc00008e4e0, 0x1233520, 0xc000460180, 0x0, 0xc000447670, 0x1, 0x1, 0x0, 0xc00044e000)
/go/pkg/mod/github.com/containerd/[email protected]/images/handlers.go:153 +0x1fd
github.com/containerd/containerd.(*Client).fetch(0xc0003b42a0, 0x1255960, 0xc00008e4e0, 0xc000092dd0, 0x7ffd42a80731, 0x36, 0x1, 0x0, 0x0, 0x0, ...)
/go/pkg/mod/github.com/containerd/[email protected]/pull.go:214 +0x6c5
github.com/containerd/containerd.(*Client).Pull(0xc0003b42a0, 0x1255960, 0xc00008e4e0, 0x7ffd42a80731, 0x36, 0xc0002dddb8, 0x7, 0x7, 0x0, 0x0, ...)
/go/pkg/mod/github.com/containerd/[email protected]/pull.go:92 +0x2e6
github.com/dragonflyoss/image-service/contrib/ctr-remote/commands.pull(0x1255960, 0xc00008e4e0, 0xc0003b42a0, 0x7ffd42a80731, 0x36, 0xc0002ddf98, 0x1255960, 0xc00008e4e0)
/nydus-rs/contrib/ctr-remote/commands/rpull.go:96 +0x3b2
github.com/dragonflyoss/image-service/contrib/ctr-remote/commands.glob..func1(0xc00013b8c0, 0x0, 0x0)
/nydus-rs/contrib/ctr-remote/commands/rpull.go:74 +0x1f7
github.com/urfave/cli.HandleAction(0xf9e9c0, 0x115da50, 0xc00013b8c0, 0xc00013b8c0, 0x0)
/go/pkg/mod/github.com/urfave/[email protected]/app.go:524 +0xfd
github.com/urfave/cli.Command.Run(0x110ad7c, 0x5, 0x0, 0x0, 0x0, 0x0, 0x0, 0x1149534, 0x3a, 0x0, ...)
/go/pkg/mod/github.com/urfave/[email protected]/command.go:173 +0x58e
github.com/urfave/cli.(*App).RunAsSubcommand(0xc000175dc0, 0xc00013b600, 0x0, 0x0)
/go/pkg/mod/github.com/urfave/[email protected]/app.go:405 +0x954
github.com/urfave/cli.Command.startApp(0x110be13, 0x6, 0x0, 0x0, 0x17e9940, 0x2, 0x2, 0x1114aff, 0xd, 0x0, ...)
/go/pkg/mod/github.com/urfave/[email protected]/command.go:372 +0x87f
github.com/urfave/cli.Command.Run(0x110be13, 0x6, 0x0, 0x0, 0x17e9940, 0x2, 0x2, 0x1114aff, 0xd, 0x0, ...)
/go/pkg/mod/github.com/urfave/[email protected]/command.go:102 +0x9f4
github.com/urfave/cli.(*App).Run(0xc000175c00, 0xc0000320a0, 0x5, 0x5, 0x0, 0x0)
/go/pkg/mod/github.com/urfave/[email protected]/app.go:277 +0x7e8
main.main()
/nydus-rs/contrib/ctr-remote/cmd/main.go:60 +0x805
goroutine 6 [select]:
google.golang.org/grpc.(*ccBalancerWrapper).watcher(0xc000295980)
/go/pkg/mod/google.golang.org/[email protected]/balancer_conn_wrappers.go:69 +0xc8
created by google.golang.org/grpc.newCCBalancerWrapper
/go/pkg/mod/google.golang.org/[email protected]/balancer_conn_wrappers.go:60 +0x172
goroutine 7 [chan receive]:
google.golang.org/grpc.(*addrConn).resetTransport(0xc0001e18c0)
/go/pkg/mod/google.golang.org/[email protected]/clientconn.go:1188 +0x769
created by google.golang.org/grpc.(*addrConn).connect
/go/pkg/mod/google.golang.org/[email protected]/clientconn.go:825 +0x12a
goroutine 9 [IO wait]:
internal/poll.runtime_pollWait(0x7f424e224f70, 0x72, 0x1233ea0)
/usr/local/go/src/runtime/netpoll.go:222 +0x55
internal/poll.(*pollDesc).wait(0xc00038a418, 0x72, 0x1233e00, 0x18119c0, 0x0)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc00038a400, 0xc0003e4000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_unix.go:159 +0x1a5
net.(*netFD).Read(0xc00038a400, 0xc0003e4000, 0x8000, 0x8000, 0x800010601, 0x0, 0x800010601)
/usr/local/go/src/net/fd_posix.go:55 +0x4f
net.(*conn).Read(0xc0000106f8, 0xc0003e4000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:182 +0x8e
bufio.(*Reader).Read(0xc0003d2660, 0xc0003b43b8, 0x9, 0x9, 0xc0003e4009, 0xc000080800, 0x0)
/usr/local/go/src/bufio/bufio.go:227 +0x222
io.ReadAtLeast(0x12313c0, 0xc0003d2660, 0xc0003b43b8, 0x9, 0x9, 0x9, 0x459093, 0x3ad8108d462158, 0xc000541e48)
/usr/local/go/src/io/io.go:314 +0x87
io.ReadFull(...)
/usr/local/go/src/io/io.go:333
golang.org/x/net/http2.readFrameHeader(0xc0003b43b8, 0x9, 0x9, 0x12313c0, 0xc0003d2660, 0x0, 0xc072a86200000000, 0x4bbb4dec, 0x186c3c0)
/go/pkg/mod/golang.org/x/[email protected]/http2/frame.go:237 +0x89
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0003b4380, 0xc0002c7fe0, 0xc0002c7fe0, 0x0, 0x0)
/go/pkg/mod/golang.org/x/[email protected]/http2/frame.go:492 +0xa5
google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc0003fc000)
/go/pkg/mod/google.golang.org/[email protected]/internal/transport/http2_client.go:1294 +0x179
created by google.golang.org/grpc/internal/transport.newHTTP2Client
/go/pkg/mod/google.golang.org/[email protected]/internal/transport/http2_client.go:310 +0x1071
goroutine 10 [select]:
google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc0002919a0, 0x1, 0x0, 0x0, 0x0, 0x0)
/go/pkg/mod/google.golang.org/[email protected]/internal/transport/controlbuf.go:395 +0x125
google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc0003d2780, 0x0, 0x0)
/go/pkg/mod/google.golang.org/[email protected]/internal/transport/controlbuf.go:515 +0x1d3
google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0003fc000)
/go/pkg/mod/google.golang.org/[email protected]/internal/transport/http2_client.go:356 +0x7b
created by google.golang.org/grpc/internal/transport.newHTTP2Client
/go/pkg/mod/google.golang.org/[email protected]/internal/transport/http2_client.go:354 +0x123c
goroutine 23 [IO wait]:
internal/poll.runtime_pollWait(0x7f424e224e88, 0x72, 0x1233ea0)
/usr/local/go/src/runtime/netpoll.go:222 +0x55
internal/poll.(*pollDesc).wait(0xc0000ac218, 0x72, 0x1233e00, 0x18119c0, 0x0)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc0000ac200, 0xc00017b300, 0x104d, 0x104d, 0x0, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_unix.go:159 +0x1a5
net.(*netFD).Read(0xc0000ac200, 0xc00017b300, 0x104d, 0x104d, 0x203000, 0x6488fb, 0xc0003dc860)
/usr/local/go/src/net/fd_posix.go:55 +0x4f
net.(*conn).Read(0xc000010710, 0xc00017b300, 0x104d, 0x104d, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:182 +0x8e
crypto/tls.(*atLeastReader).Read(0xc000218020, 0xc00017b300, 0x104d, 0x104d, 0x1af, 0x1048, 0xc00005b668)
/usr/local/go/src/crypto/tls/conn.go:779 +0x62
bytes.(*Buffer).ReadFrom(0xc0003dc980, 0x1231540, 0xc000218020, 0x40d365, 0xfef2e0, 0x10dfc80)
/usr/local/go/src/bytes/buffer.go:204 +0xb1
crypto/tls.(*Conn).readFromUntil(0xc0003dc700, 0x1232a60, 0xc000010710, 0x5, 0xc000010710, 0x19e)
/usr/local/go/src/crypto/tls/conn.go:801 +0xf3
crypto/tls.(*Conn).readRecordOrCCS(0xc0003dc700, 0x0, 0x0, 0xc0004304e0)
/usr/local/go/src/crypto/tls/conn.go:608 +0x115
crypto/tls.(*Conn).readRecord(...)
/usr/local/go/src/crypto/tls/conn.go:576
crypto/tls.(*Conn).Read(0xc0003dc700, 0xc0005de000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/crypto/tls/conn.go:1252 +0x15f
net/http.(*persistConn).Read(0xc0000a8fc0, 0xc0005de000, 0x1000, 0x1000, 0xc00005be60, 0x465fc0, 0xc00005be60)
/usr/local/go/src/net/http/transport.go:1894 +0x77
bufio.(*Reader).fill(0xc0000763c0)
/usr/local/go/src/bufio/bufio.go:101 +0x105
bufio.(*Reader).Peek(0xc0000763c0, 0x1, 0x2, 0x0, 0x0, 0x0, 0xc000430480)
/usr/local/go/src/bufio/bufio.go:139 +0x4f
net/http.(*persistConn).readLoop(0xc0000a8fc0)
/usr/local/go/src/net/http/transport.go:2047 +0x1a8
created by net/http.(*Transport).dialConn
/usr/local/go/src/net/http/transport.go:1715 +0xcb7
goroutine 24 [select]:
net/http.(*persistConn).writeLoop(0xc0000a8fc0)
/usr/local/go/src/net/http/transport.go:2346 +0x11c
created by net/http.(*Transport).dialConn
/usr/local/go/src/net/http/transport.go:1716 +0xcdc
goroutine 50 [select]:
net/http.(*Transport).getConn(0xc00046c000, 0xc0000748c0, 0x0, 0xc0001dc000, 0x5, 0xc0002103e0, 0xb, 0x0, 0x0, 0x0, ...)
/usr/local/go/src/net/http/transport.go:1350 +0x5ac
net/http.(*Transport).roundTrip(0xc00046c000, 0xc0001ba500, 0x30, 0x7f424f2623c0, 0x150)
/usr/local/go/src/net/http/transport.go:569 +0x7ce
net/http.(*Transport).RoundTrip(0xc00046c000, 0xc0001ba500, 0xc00046c000, 0x0, 0x0)
/usr/local/go/src/net/http/roundtrip.go:17 +0x35
net/http.send(0xc0001ba500, 0x1232b00, 0xc00046c000, 0x0, 0x0, 0x0, 0xc0000100d0, 0xc000076120, 0x1, 0x0)
/usr/local/go/src/net/http/client.go:252 +0x453
net/http.(*Client).send(0xc0003a3c80, 0xc0001ba500, 0x0, 0x0, 0x0, 0xc0000100d0, 0x0, 0x1, 0x22c)
/usr/local/go/src/net/http/client.go:176 +0xff
net/http.(*Client).do(0xc0003a3c80, 0xc0001ba500, 0x0, 0x0, 0x0)
/usr/local/go/src/net/http/client.go:718 +0x45f
net/http.(*Client).Do(...)
/usr/local/go/src/net/http/client.go:586
golang.org/x/net/context/ctxhttp.Do(0x1255960, 0xc0003a3920, 0xc0003a3c80, 0xc0001ba400, 0x0, 0x0, 0x1255960)
/go/pkg/mod/golang.org/x/[email protected]/context/ctxhttp/ctxhttp.go:27 +0x10f
github.com/containerd/containerd/remotes/docker.(*request).do(0xc000186990, 0x1255960, 0xc0001fca80, 0xa03589f80a80f997, 0x11093ae, 0xa00000c0000387b0)
/go/pkg/mod/github.com/containerd/[email protected]/remotes/docker/resolver.go:567 +0x57f
github.com/containerd/containerd/remotes/docker.(*request).doWithRetries(0xc000186990, 0x1255960, 0xc0001fca80, 0x0, 0x0, 0x0, 0x2f, 0x1215276, 0x0)
/go/pkg/mod/github.com/containerd/[email protected]/remotes/docker/resolver.go:576 +0x46
github.com/containerd/containerd/remotes/docker.dockerFetcher.open(0xc00046e000, 0x1255960, 0xc0001fca80, 0xc000186990, 0xc00044c000, 0x2a, 0x0, 0x0, 0x0, 0x0, ...)
/go/pkg/mod/github.com/containerd/[email protected]/remotes/docker/fetcher.go:165 +0x22e
github.com/containerd/containerd/remotes/docker.dockerFetcher.Fetch.func1(0x0, 0x67, 0x0, 0x0, 0x7f424f26b378)
/go/pkg/mod/github.com/containerd/[email protected]/remotes/docker/fetcher.go:109 +0xb0f
github.com/containerd/containerd/remotes/docker.(*httpReadSeeker).reader(0xc0004ba6c0, 0x0, 0x7f424c899ec8, 0xc000046e98, 0x0)
/go/pkg/mod/github.com/containerd/[email protected]/remotes/docker/httpreadseeker.go:147 +0x76
github.com/containerd/containerd/remotes/docker.(*httpReadSeeker).Read(0xc0004ba6c0, 0xc000600000, 0x100000, 0x100000, 0xc00020e480, 0x0, 0x0)
/go/pkg/mod/github.com/containerd/[email protected]/remotes/docker/httpreadseeker.go:53 +0x4f
io.ReadAtLeast(0x7f424f26b3c8, 0xc0004ba6c0, 0xc000600000, 0x100000, 0x100000, 0x100000, 0x18a4a28, 0x0, 0x0)
/usr/local/go/src/io/io.go:314 +0x87
github.com/containerd/containerd/content.copyWithBuffer(0x7f424c75f480, 0xc0004ba640, 0x7f424f26b3c8, 0xc0004ba6c0, 0x0, 0x0, 0x0)
/go/pkg/mod/github.com/containerd/[email protected]/content/helpers.go:262 +0x199
github.com/containerd/containerd/content.Copy(0x1255960, 0xc0004a2360, 0x125b3c0, 0xc0004ba640, 0x7f424f26b3c8, 0xc0004ba6c0, 0x41c, 0xc00044e000, 0x47, 0x0, ...)
/go/pkg/mod/github.com/containerd/[email protected]/content/helpers.go:147 +0x167
github.com/containerd/containerd/remotes.fetch(0x1255960, 0xc0004a2360, 0x7f424c827cd8, 0xc00009a620, 0x1233600, 0xc00046e000, 0xc00044c000, 0x2a, 0xc00044e000, 0x47, ...)
/go/pkg/mod/github.com/containerd/[email protected]/remotes/handlers.go:147 +0x6cd
github.com/containerd/containerd/remotes.FetchHandler.func1(0x12558a0, 0xc0000a2100, 0xc00044c000, 0x2a, 0xc00044e000, 0x47, 0x41c, 0x0, 0x0, 0x0, ...)
/go/pkg/mod/github.com/containerd/[email protected]/remotes/handlers.go:103 +0x378
github.com/containerd/containerd/images.HandlerFunc.Handle(0xc0004a2270, 0x12558a0, 0xc0000a2100, 0xc00044c000, 0x2a, 0xc00044e000, 0x47, 0x41c, 0x0, 0x0, ...)
/go/pkg/mod/github.com/containerd/[email protected]/images/handlers.go:55 +0x82
github.com/containerd/containerd/images.Handlers.func1(0x12558a0, 0xc0000a2100, 0xc00044c000, 0x2a, 0xc00044e000, 0x47, 0x41c, 0x0, 0x0, 0x0, ...)
/go/pkg/mod/github.com/containerd/[email protected]/images/handlers.go:65 +0xf8
github.com/containerd/containerd/images.HandlerFunc.Handle(0xc000218100, 0x12558a0, 0xc0000a2100, 0xc00044c000, 0x2a, 0xc00044e000, 0x47, 0x41c, 0x0, 0x0, ...)
/go/pkg/mod/github.com/containerd/[email protected]/images/handlers.go:55 +0x82
github.com/dragonflyoss/image-service/contrib/ctr-remote/commands.appendDefaultLabelsHandlerWrapper.func1.1(0x12558a0, 0xc0000a2100, 0xc00044c000, 0x2a, 0xc00044e000, 0x47, 0x41c, 0x0, 0x0, 0x0, ...)
/nydus-rs/contrib/ctr-remote/commands/rpull.go:114 +0x96
github.com/containerd/containerd/images.HandlerFunc.Handle(0xc0004a22a0, 0x12558a0, 0xc0000a2100, 0xc00044c000, 0x2a, 0xc00044e000, 0x47, 0x41c, 0x0, 0x0, ...)
/go/pkg/mod/github.com/containerd/[email protected]/images/handlers.go:55 +0x82
github.com/containerd/containerd.(*unpacker).handlerWrapper.func1.1(0x12558a0, 0xc0000a2100, 0xc00044c000, 0x2a, 0xc00044e000, 0x47, 0x41c, 0x0, 0x0, 0x0, ...)
/go/pkg/mod/github.com/containerd/[email protected]/unpacker.go:304 +0xf6
github.com/containerd/containerd/images.HandlerFunc.Handle(0xc000460180, 0x12558a0, 0xc0000a2100, 0xc00044c000, 0x2a, 0xc00044e000, 0x47, 0x41c, 0x0, 0x0, ...)
/go/pkg/mod/github.com/containerd/[email protected]/images/handlers.go:55 +0x82
github.com/containerd/containerd/images.Dispatch.func1(0x0, 0x0)
/go/pkg/mod/github.com/containerd/[email protected]/images/handlers.go:134 +0xd5
golang.org/x/sync/errgroup.(*Group).Go.func1(0xc0004a2300, 0xc000188200)
/go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:57 +0x59
created by golang.org/x/sync/errgroup.(*Group).Go
/go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:54 +0x66
goroutine 51 [select]:
google.golang.org/grpc.newClientStream.func5(0xc0003dc380, 0xc000124900, 0x1255960, 0xc0004a2480)
/go/pkg/mod/google.golang.org/[email protected]/stream.go:318 +0xda
created by google.golang.org/grpc.newClientStream
/go/pkg/mod/google.golang.org/[email protected]/stream.go:317 +0xbd0
goroutine 14 [select]:
net.(*Resolver).lookupIPAddr(0x186b4c0, 0x1255920, 0xc0002783c0, 0x1109705, 0x3, 0xc0002103e0, 0x7, 0x1bb, 0x0, 0x0, ...)
/usr/local/go/src/net/lookup.go:299 +0x685
net.(*Resolver).internetAddrList(0x186b4c0, 0x1255920, 0xc0002783c0, 0x1109705, 0x3, 0xc0002103e0, 0xb, 0x0, 0x0, 0x0, ...)
/usr/local/go/src/net/ipsock.go:280 +0x4d4
net.(*Resolver).resolveAddrList(0x186b4c0, 0x1255920, 0xc0002783c0, 0x1109c57, 0x4, 0x1109705, 0x3, 0xc0002103e0, 0xb, 0x0, ...)
/usr/local/go/src/net/dial.go:221 +0x47d
net.(*Dialer).DialContext(0xc0004600c0, 0x1255960, 0xc0003a3920, 0x1109705, 0x3, 0xc0002103e0, 0xb, 0x0, 0x0, 0x0, ...)
/usr/local/go/src/net/dial.go:403 +0x22b
net/http.(*Transport).dial(0xc00046c000, 0x1255960, 0xc0003a3920, 0x1109705, 0x3, 0xc0002103e0, 0xb, 0x0, 0x1, 0xffffffffffffffff, ...)
/usr/local/go/src/net/http/transport.go:1144 +0x1fd
net/http.(*Transport).dialConn(0xc00046c000, 0x1255960, 0xc0003a3920, 0x0, 0xc0001dc000, 0x5, 0xc0002103e0, 0xb, 0x0, 0xc0001e3560, ...)
/usr/local/go/src/net/http/transport.go:1582 +0x1adb
net/http.(*Transport).dialConnFor(0xc00046c000, 0xc0000da370)
/usr/local/go/src/net/http/transport.go:1424 +0xc6
created by net/http.(*Transport).queueForDial
/usr/local/go/src/net/http/transport.go:1393 +0x40f
goroutine 15 [select]:
net.cgoLookupIP(0x12558a0, 0xc000074940, 0x1109705, 0x3, 0xc0002103e0, 0x7, 0x0, 0x0, 0x0, 0x0, ...)
/usr/local/go/src/net/cgo_unix.go:229 +0x199
net.(*Resolver).lookupIP(0x186b4c0, 0x12558a0, 0xc000074940, 0x1109705, 0x3, 0xc0002103e0, 0x7, 0x0, 0x0, 0x0, ...)
/usr/local/go/src/net/lookup_unix.go:96 +0x187
net.glob..func1(0x12558a0, 0xc000074940, 0xc00023e7b0, 0x1109705, 0x3, 0xc0002103e0, 0x7, 0xc0003a3920, 0x0, 0xc0001dc000, ...)
/usr/local/go/src/net/hook.go:23 +0x72
net.(*Resolver).lookupIPAddr.func1(0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/net/lookup.go:293 +0xb9
internal/singleflight.(*Group).doCall(0x186b4d0, 0xc00052e230, 0xc0002103f0, 0xb, 0xc000074980)
/usr/local/go/src/internal/singleflight/singleflight.go:95 +0x2e
created by internal/singleflight.(*Group).DoChan
/usr/local/go/src/internal/singleflight/singleflight.go:88 +0x2cc
My ultimate goal is to be able to run the image service on Kubernetes, but first I'm having trouble understanding where `nydusd` and `containerd-nydus-grpc` fit into the bigger picture.
Where is the `nydusd` daemon used versus the `containerd-nydus-grpc` daemon? If I had a nydus image stored in a registry and wanted to use the image service to pull and run it on Kubernetes (let's assume GKE with a containerd-optimized OS on the nodes), which daemon would I use? Let's also assume these images could be scheduled on multiple nodes: which daemon would run in the DaemonSet?
In general, any insight into how to get this up and running on Kubernetes would be much appreciated.
I've read some of Ant Group's blog posts about Nydus, which say that container image data is downloaded on demand, so users no longer need to download a complete image to start a container. How should this be understood? For example, if I have a JDK-based image where the JDK is one layer and the jar package is another, will the container be started as soon as the JDK layer finishes downloading?
When I tested containerd with nydus-snapshotter following this repository's doc:
https://github.com/dragonflyoss/image-service/blob/master/docs/containerd-env-setup.md
I converted an image to nydus format using:
nydusify convert --nydus-image /usr/local/bin/nydus-image --source docker.io/library/ubuntu:16.04 --target localhost:5000/ubuntu:16.04-nydus
The image was then pushed to my local registry, and I can find it:
>> curl "http://localhost:5000/v2/ubuntu/tags/list"
{"name":"ubuntu","tags":["16.04-nydus"]}
But after I started `nydus-snapshotter` and `containerd`, I wanted to use `crictl` to test the whole process:
crictl pull localhost:5000/ubuntu:16.04-nydus
An error happened in `crictl` with the following message:
FATA[0000] pulling image: rpc error: code = Unknown desc = failed to pull and unpack image "localhost:5000/ubuntu:16.04-nydus": failed to prepare extraction snapshot "extract-262457039-f1jQ sha256:6df557218c153a61f8d1901eb213a80413b18a6f789c9ed0b5bef6d3ec9e3679": failed to find image ref of snapshot 25, labels map[containerd.io/snapshot.ref:sha256:6df557218c153a61f8d1901eb213a80413b18a6f789c9ed0b5bef6d3ec9e3679 containerd.io/snapshot/nydus-bootstrap:true]: unknown
The `nydus-snapshotter` error message:
{"dir":"/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/25","level":"info","msg":"cleanupSnapshotDirectory /var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/25","time":"2021-01-21T16:53:35.285184694+08:00"}
{"dir":"/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/25","error":"failed to get daemon by snapshotID (25)","level":"error","msg":"failed to unmount","time":"2021-01-21T16:53:35.285204313+08:00"}
{"level":"info","msg":"umount nydus daemon of id 25, mountpoint /var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/25","time":"2021-01-21T16:53:35.285217642+08:00"}
{"dir":"/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/25","error":"failed to get daemon by snapshotID (25)","level":"error","msg":"failed to unmount","time":"2021-01-21T16:53:35.285231057+08:00"}
........
The `nydus-snapshotter` settings:
containerd-nydus-grpc --nydusd-path /usr/local/bin/nydusd --config-path /etc/nydusd-config.json --shared-daemon --log-level debug --root /var/lib/containerd/io.containerd.snapshotter.v1.nydus --address /run/containerd/containerd-nydus-grpc.sock --nydusimg-path /usr/local/bin/nydus-image
How should I understand this problem, and how can I solve it?
As mentioned in PR #243:
Discussed with @luodw offline; the issue is that when we create more containers from the same image in non-shared-daemon mode, we create extra directories in WithSocketDir, WithConfigDir, etc., as a side effect of a failing fs.manager.NewDaemon(d). Definitely a bug that needs to be fixed.
When I clone the code and try to compile it according to the docs, it fails with the error "associated item `MAX` not found for type `u64`". Details below:
[ image-service]# make
# TODO: switch to --out-dir when it moves to stable
# For now we build with separate target directories
cargo build --features=virtiofsd --target-dir target-virtiofsd
Compiling nydus-rs v0.1.0 (/home/RustProjects/image-service)
error[E0599]: no associated item named `MAX` found for type `u64` in the current scope
--> src/bin/nydus-image/node.rs:130:23
|
130 | dev: u64::MAX,
| ^^^ associated item not found in `u64`
|
help: you are looking for the module in `std`, not the primitive type
|
130 | dev: std::u64::MAX,
| ^^^^^^^^^^^^^
error[E0599]: no associated item named `MAX` found for type `u64` in the current scope
--> src/bin/nydus-image/tree.rs:113:23
|
113 | dev: u64::MAX,
| ^^^ associated item not found in `u64`
|
help: you are looking for the module in `std`, not the primitive type
|
113 | dev: std::u64::MAX,
| ^^^^^^^^^^^^^
error[E0599]: no associated item named `MAX` found for type `u64` in the current scope
--> src/bin/nydus-image/tree.rs:333:23
|
333 | dev: u64::MAX,
| ^^^ associated item not found in `u64`
|
help: you are looking for the module in `std`, not the primitive type
|
333 | dev: std::u64::MAX,
| ^^^^^^^^^^^^^
error: aborting due to 3 previous errors
For more information about this error, try `rustc --explain E0599`.
error: could not compile `nydus-rs`.
To learn more, run the command again with --verbose.
make: *** [Makefile:12: build-virtiofsd] Error 101
Other information:
[ image-service]# git log
commit 63841eedc70953813c45d7fd4220193844780667 (HEAD -> master, origin/master, origin/HEAD)
Merge: b1eec3c 3b963ce
...
[ image-service]# cargo version
cargo 1.42.0 (86334295e 2020-01-31)
[ image-service]# rustc --version
rustc 1.42.0 (b8cedc004 2020-03-09)
Solution:
If I use `std::u64::MAX` instead of `u64::MAX`, everything is OK.
However, when I try again with rustc 1.47.0, it works fine even if I don't add `std::` in front of `u64::MAX`.
So I suggest using the full path `std::u64::MAX`.
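For context: the associated constant `u64::MAX` was only stabilized in Rust 1.43, which is why rustc 1.42 rejects it while 1.47 accepts both spellings. A minimal illustration:

```rust
// The module constant std::u64::MAX exists on old and new toolchains alike;
// the associated constant form u64::MAX was stabilized in Rust 1.43, so it
// fails to compile on rustc 1.42.
fn main() {
    let dev: u64 = std::u64::MAX; // portable across both toolchains
    println!("{}", dev); // prints: 18446744073709551615
    // On rustc >= 1.43 this also compiles:
    // let dev: u64 = u64::MAX;
}
```

Using the full module path keeps the code building on the older toolchain at the cost of a deprecation warning on newer ones.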
**1. The problem: nydusd "No such device"**
mkdir -p /opt/nydusd && cd /opt/nydusd && nydusify convert --nydus-image /usr/bin/nydus-image --source 192.168.1.130:8099/test/nginx:alpine --target 192.168.1.130:8099/test/nginx-nydus --source-insecure --target-insecure
......
INFO[2021-08-21T18:42:45+08:00] [BLOB] Push blob Digest="sha256:1066684afa53a6c811caa6b22ae6c12431dd36ad8b12d98e214c935b6bb6a633" Size="1.9 kB" Time=29.101858ms
INFO[2021-08-21T18:42:45+08:00] [MANI] Push manifest
INFO[2021-08-21T18:42:45+08:00] [MANI] Push manifest Time=82.089513ms
INFO[2021-08-21T18:42:45+08:00] Converted to 192.168.1.130:8099/test/nginx-nydus
tree tmp -L 4
tmp
├── blobs
│ ├── 1066684afa53a6c811caa6b22ae6c12431dd36ad8b12d98e214c935b6bb6a633
│ ├── 4ee84c5b440641474cde5ebc7a1a146c377de72b71c43f8a4f57c5eea031296c
│ ├── 8c735009543a508adfc6e7986663d1f11343f2500ac89d3a32885aca8cd28ec0
│ ├── c440fd17d5dcda79d5cc45342b41e77c35a58e1ad2226998d06213acfa465d25
│ ├── ebc3c76be347ad99857f9b762d5ce3a320ba77a2441ce5f63b37110a3157a69d
│ └── faacfd1af225f32c5d748e2b4e4752a3e495e03d4937e3f27f659a298e98dce3
├── bootstraps
│ ├── 1-sha256:bc276c40b172b1c5467277d36db5308a203a48262d5f278766cf083947d05466
│ ├── 2-sha256:9fda960ad36a1e76f356412120e3d7d03b607d3d9962041c7269015af4b6a336
│ ├── 3-sha256:7336cfe0ea04876960d7ab5b6d16b8167185a00b4ac9e3f3ecdffad27d27b242
│ ├── 4-sha256:ad5643e92f173af5a47a05624dcc44e934ca174390bbb480ad66f902b5cd6443
│ ├── 5-sha256:9929dc79f45b709f6fdbf428c4f8ccd3b19e83543542cb9a5cdea8b3aa53766d
│ └── 6-sha256:c3554b2d61e3c1cffcaba4b4fa7651c644a3354efaafa2f22cb53542f6c600dc
├── output.json
└── source
**vi ./registry.json**
{
"device": {
"backend": {
"type": "registry",
"config": {
"scheme": "http",
"host": "192.168.1.130:8099",
"repo": "/test/nginx:alpine"
}
},
"digest_validate": false
},
"mode": "direct"
}
nydusd --config ./registry.json --mountpoint ./mnt --bootstrap ./tmp/bootstraps/6-sha256:c3554b2d61e3c1cffcaba4b4fa7651c644a3354efaafa2f22cb53542f6c600dc --log-level info
[2021-08-21 19:08:12.111874 +08:00] INFO [utils/src/lib.rs:75] Git Commit: "5caf58755f5161e291230024343506584c8b4823", Build Time: "Mon, 22 Mar 2021 10:07:20 +0000", Profile: "release", Rustc Version: "rustc 1.49.0 (e1884a8e3 2020-12-29)"
[2021-08-21 19:08:12.112094 +08:00] INFO [rafs/src/metadata/mod.rs:235] rafs superblock features: COMPRESS_LZ4_BLOCK | DIGESTER_BLAKE3 | EXPLICIT_UID_GID
[2021-08-21 19:08:12.112163 +08:00] INFO [rafs/src/storage/backend/request.rs:147] backend config: CommonConfig { proxy: ProxyConfig { url: "", ping_url: "", fallback: true, check_interval: 5 }, timeout: 5, connect_timeout: 5, force_upload: false, retry_limit: 0 }
[2021-08-21 19:08:12.120556 +08:00] INFO [src/bin/nydusd/daemon.rs:410] Rafs imported
[2021-08-21 19:08:12.120634 +08:00] INFO [src/bin/nydusd/daemon.rs:329] rafs mounted at /
Error: Os { code: 19, kind: Other, message: "No such device" }
2. configuration information
vi /etc/containerd/config.toml
version = 2
root = "/var/lib/containerd"
state = "/run/containerd"
[proxy_plugins]
[proxy_plugins.nydus]
type = "snapshot"
address = "/run/containerd/containerd-nydus-grpc.sock"
......
[plugins]
[plugins."io.containerd.grpc.v1.cri"]
sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2"
systemd_cgroup = true
[plugins."io.containerd.grpc.v1.cri".containerd]
snapshotter = "nydus"
disable_snapshot_annotations = false
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
runtime_type = "io.containerd.runtime.v1.linux"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
......
vi /etc/nydusd-config.json
{
"device": {
"backend": {
"type": "registry",
"config": {
"scheme": "http",
"host": "192.168.1.130:8099",
"auth": "YWRtaW46SGFyYm9yMTIzNDU=",
"timeout": 5,
"connect_timeout": 5,
"retry_limit": 0
}
},
"cache": {
"type": "blobcache",
"compressed": true,
"config": {
"work_dir": "cache"
}
}
},
"mode": "direct",
"digest_validate": false,
"iostats_files": true,
"enable_xattr": false,
"fs_prefetch": {
"enable": true,
"threads_count": 10,
"bandwidth_rate": 1048576
}
}
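The `auth` value in the config above is the base64 encoding of `user:password`. A minimal std-only sketch of producing it (standard alphabet with `=` padding; the credentials are the example ones already present in the config, not real secrets):

```rust
// Standard base64 alphabet (RFC 4648, with '=' padding).
const TABLE: &[u8; 64] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

fn base64_encode(input: &[u8]) -> String {
    let mut out = String::new();
    for chunk in input.chunks(3) {
        // Pack up to 3 bytes into a 24-bit group, zero-padding the tail.
        let b = [chunk[0], *chunk.get(1).unwrap_or(&0), *chunk.get(2).unwrap_or(&0)];
        let n = ((b[0] as u32) << 16) | ((b[1] as u32) << 8) | b[2] as u32;
        out.push(TABLE[(n >> 18) as usize & 63] as char);
        out.push(TABLE[(n >> 12) as usize & 63] as char);
        // Emit '=' padding for missing input bytes.
        out.push(if chunk.len() > 1 { TABLE[(n >> 6) as usize & 63] as char } else { '=' });
        out.push(if chunk.len() > 2 { TABLE[n as usize & 63] as char } else { '=' });
    }
    out
}

fn main() {
    println!("{}", base64_encode(b"admin:Harbor12345"));
}
```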
3. version information
runc -v
runc version 1.0.0
commit: v1.0.0-0-g84113eef
spec: 1.0.2-dev
go: go1.15.14
libseccomp: 2.3.1
containerd -v
containerd github.com/containerd/containerd v1.4.8 7eba5930496d9bbe375fdf71603e610ad737d2b2
4. service status
systemctl status containerd -l
● containerd.service - containerd container runtime
Loaded: loaded (/etc/systemd/system/containerd.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2021-08-21 19:07:12 CST; 1min 6s ago
Docs: https://containerd.io
Process: 704509 ExecStartPre=/sbin/modprobe overlay (code=exited, status=1/FAILURE)
Main PID: 704513 (containerd)
Tasks: 13
Memory: 20.4M
CGroup: /system.slice/containerd.service
└─704513 /usr/local/bin/containerd
Aug 21 19:07:12 k8s-master containerd[704513]: time="2021-08-21T19:07:12.989712145+08:00" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 21 19:07:12 k8s-master systemd[1]: Started containerd container runtime.
Aug 21 19:07:12 k8s-master containerd[704513]: time="2021-08-21T19:07:12.989779318+08:00" level=info msg="containerd successfully booted in 0.051927s"
Aug 21 19:07:12 k8s-master containerd[704513]: time="2021-08-21T19:07:12.991490838+08:00" level=warning msg="The image registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 is not unpacked."
Aug 21 19:07:12 k8s-master containerd[704513]: time="2021-08-21T19:07:12.995067974+08:00" level=warning msg="The image registry.cn-hangzhou.aliyuncs.com/google_containers/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f is not unpacked."
Aug 21 19:07:12 k8s-master containerd[704513]: time="2021-08-21T19:07:12.998616376+08:00" level=warning msg="The image sha256:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c is not unpacked."
Aug 21 19:07:12 k8s-master containerd[704513]: time="2021-08-21T19:07:12.999379005+08:00" level=info msg="Start event monitor"
Aug 21 19:07:12 k8s-master containerd[704513]: time="2021-08-21T19:07:12.999403448+08:00" level=info msg="Start snapshots syncer"
Aug 21 19:07:12 k8s-master containerd[704513]: time="2021-08-21T19:07:12.999412718+08:00" level=info msg="Start cni network conf syncer"
Aug 21 19:07:12 k8s-master containerd[704513]: time="2021-08-21T19:07:12.999416308+08:00" level=info msg="Start streaming server
5. snapshotter containerd-nydus-grpc
containerd-nydus-grpc --nydusd-path /usr/bin/nydusd --config-path /etc/nydusd-config.json --log-level trace --root /var/lib/containerd/io.containerd.snapshotter.v1.nydus --address /run/containerd/containerd-nydus-grpc.sock
{"level":"info","msg":"found 0 daemons running","time":"2021-08-21T19:06:39.850981412+08:00"}
I compiled nydusify on macOS with the following commands:
cd contrib/nydusify
make build-release
cd cmd
chmod +x nydusify
./nydusify convert --help
I tried to convert an image with nydusify. Environment:
uname -a
Linux k8s-10-21-17-152 3.10.0-1127.19.1.el7.x86_64 #1 SMP Tue Aug 25 17:23:54 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
image-server commit : a17f7a5
nydusify convert cmd
nydusify convert --source-auth <my-auth> --target-auth <my-auth> --source docker.io/ppeijie/demo-http:0fcedfe --target docker.io/ppeijie/demo-http:0fcedfe-nydus --nydus-image /bin/nydus-image --containerd-sock /var/run/docker/containerd/containerd.sock --work-dir /root/ppj/tmp
full log :
nydusify convert --source-auth <my-auth> --target-auth <my-auth> --source docker.io/ppeijie/demo-http:0fcedfe --target docker.io/ppeijie/demo-http:0fcedfe-nydus --nydus-image /bin/nydus-image --containerd-sock /var/run/docker/containerd/containerd.sock --work-dir /root/ppj/tmp
INFO[2020-11-03T10:48:15+08:00] Pulling image docker.io/ppeijie/demo-http:0fcedfe with platform linux/amd64
INFO[2020-11-03T10:48:17+08:00] Unpacking layer sha256:9a598663ae731cd95b7cc07391f4130f30076a4ede6bc52fe07375eca89f65bd
INFO[2020-11-03T10:48:24+08:00] Building layer sha256:9a598663ae731cd95b7cc07391f4130f30076a4ede6bc52fe07375eca89f65bd
2020-11-03T10:48:26+08:00 - INFO - rafs superblock features: COMPRESS_LZ4_BLOCK DIGESTER_BLAKE3 EXPLICIT_UID_GID HAS_XATTR
2020-11-03T10:48:26+08:00 - INFO - build finished, blob id: ["672fb081918d7ba2d0f43e86efd85b8d82c0e5aa606af989c9af83a7531aaee1"], blob file: "BRPH07"
INFO[2020-11-03T10:48:26+08:00] Unpacking layer sha256:9ab1c148438f20fdf82abda82959261867c3c368b63695b75a8744d9d0664861
INFO[2020-11-03T10:48:26+08:00] Pushing blob layer sha256:672fb081918d7ba2d0f43e86efd85b8d82c0e5aa606af989c9af83a7531aaee1
INFO[2020-11-03T10:48:26+08:00] Building layer sha256:9ab1c148438f20fdf82abda82959261867c3c368b63695b75a8744d9d0664861
2020-11-03T10:48:26+08:00 - INFO - rafs superblock features: COMPRESS_LZ4_BLOCK DIGESTER_BLAKE3 EXPLICIT_UID_GID HAS_XATTR
2020-11-03T10:48:26+08:00 - INFO - rafs superblock features: COMPRESS_LZ4_BLOCK DIGESTER_BLAKE3 EXPLICIT_UID_GID HAS_XATTR
2020-11-03T10:48:26+08:00 - INFO - build finished, blob id: ["672fb081918d7ba2d0f43e86efd85b8d82c0e5aa606af989c9af83a7531aaee1", "a3249bd89e48fea052d256d8f8380e31357bd40df6d5bac44b0f4dfd0f190e7a"], blob file: "UqgEZn"
INFO[2020-11-03T10:48:26+08:00] Unpacking layer sha256:3894911508a114371b304aa955cda2975d339aa79510a771016fe620d33c522e
INFO[2020-11-03T10:48:26+08:00] Pushing blob layer sha256:a3249bd89e48fea052d256d8f8380e31357bd40df6d5bac44b0f4dfd0f190e7a
INFO[2020-11-03T10:48:26+08:00] Building layer sha256:3894911508a114371b304aa955cda2975d339aa79510a771016fe620d33c522e
2020-11-03T10:48:26+08:00 - INFO - rafs superblock features: COMPRESS_LZ4_BLOCK DIGESTER_BLAKE3 EXPLICIT_UID_GID HAS_XATTR
2020-11-03T10:48:27+08:00 - INFO - rafs superblock features: COMPRESS_LZ4_BLOCK DIGESTER_BLAKE3 EXPLICIT_UID_GID HAS_XATTR
2020-11-03T10:48:27+08:00 - INFO - build finished, blob id: ["672fb081918d7ba2d0f43e86efd85b8d82c0e5aa606af989c9af83a7531aaee1", "a3249bd89e48fea052d256d8f8380e31357bd40df6d5bac44b0f4dfd0f190e7a", "3c84a6e5c8f867359256749e4a6c022be28bba91f941330605d37a3629fd9945"], blob file: "iejYyu"
INFO[2020-11-03T10:48:27+08:00] Unpacking layer sha256:cc2f2a15de93e4cad65e5aa1a5fb6b5aeaf36838beb237c97fdfe79c42165869
INFO[2020-11-03T10:48:27+08:00] Pushing blob layer sha256:3c84a6e5c8f867359256749e4a6c022be28bba91f941330605d37a3629fd9945
INFO[2020-11-03T10:48:32+08:00] Building layer sha256:cc2f2a15de93e4cad65e5aa1a5fb6b5aeaf36838beb237c97fdfe79c42165869
2020-11-03T10:48:32+08:00 - INFO - rafs superblock features: COMPRESS_LZ4_BLOCK DIGESTER_BLAKE3 EXPLICIT_UID_GID HAS_XATTR
2020-11-03T10:48:34+08:00 - INFO - rafs superblock features: COMPRESS_LZ4_BLOCK DIGESTER_BLAKE3 EXPLICIT_UID_GID HAS_XATTR
2020-11-03T10:48:35+08:00 - INFO - build finished, blob id: ["672fb081918d7ba2d0f43e86efd85b8d82c0e5aa606af989c9af83a7531aaee1", "a3249bd89e48fea052d256d8f8380e31357bd40df6d5bac44b0f4dfd0f190e7a", "3c84a6e5c8f867359256749e4a6c022be28bba91f941330605d37a3629fd9945", "ef0c8bfc53e2ade9a616040fdb1aef9b4e950857e1020fab20a282e2f0ac4332"], blob file: "HjWIqB"
INFO[2020-11-03T10:48:35+08:00] Unpacking layer sha256:57190868ac9b388928692bf16086aba4255639f3bb3cc9ce4738e3a189ea9bdd
INFO[2020-11-03T10:48:35+08:00] Pushing blob layer sha256:ef0c8bfc53e2ade9a616040fdb1aef9b4e950857e1020fab20a282e2f0ac4332
INFO[2020-11-03T10:48:40+08:00] Building layer sha256:57190868ac9b388928692bf16086aba4255639f3bb3cc9ce4738e3a189ea9bdd
2020-11-03T10:48:40+08:00 - INFO - rafs superblock features: COMPRESS_LZ4_BLOCK DIGESTER_BLAKE3 EXPLICIT_UID_GID HAS_XATTR
2020-11-03T10:48:40+08:00 - ERROR - Error:
"inode validation failure, inode OndiskInode { i_digest: RafsDigest { data: [213, 6, 245, 217, 132, 190, 40, 201, 82, 141, 220, 66, 229, 88, 169, 157, 219, 61, 210, 184, 144, 61, 80, 96, 141, 135, 116, 133, 56, 164, 146, 19] }, i_parent: 4294967299, i_ino: 3113851290066, i_uid: 536883, i_gid: 0, i_projid: 647011, i_mode: 0, i_size: 0, i_blocks: 0, i_flags: XATTR | HAS_HOLE | 0x0x8b58e2fd35877080, i_nlink: 3666274279, i_child_index: 1805769608, i_child_count: 510099711, i_name_size: 15729, i_symlink_size: 34190, i_reserved: [30, 13, 171, 132, 136, 174, 245, 44, 6, 0, 0, 0, 0, 0, 0, 0, 102, 0, 0, 0, 0, 0, 0, 0] }"
at rafs/src/metadata/direct.rs:482
note: enable `RUST_BACKTRACE=1` env to display a backtrace
Error: Os { code: 9, kind: Other, message: "Bad file descriptor" }
FATA[2020-11-03T10:48:40+08:00] unpack source image layer: build layer sha256:57190868ac9b388928692bf16086aba4255639f3bb3cc9ce4738e3a189ea9bdd: exit status 1
tree info
tree tmp/ -L 5
tmp/
|-- docker.io
| `-- ppeijie
| |-- demo-http:0fcedfe
| | |-- sha256:3894911508a114371b304aa955cda2975d339aa79510a771016fe620d33c522e
| | | `-- usr
| | |-- sha256:57190868ac9b388928692bf16086aba4255639f3bb3cc9ce4738e3a189ea9bdd
| | | |-- app
| | | `-- jdk8
| | |-- sha256:9a598663ae731cd95b7cc07391f4130f30076a4ede6bc52fe07375eca89f65bd
| | | |-- anaconda-post.log
| | | |-- bin -> usr/bin
| | | |-- dev
| | | |-- etc
| | | |-- home
| | | |-- lib -> usr/lib
| | | |-- lib64 -> usr/lib64
| | | |-- lost+found
| | | |-- media
| | | |-- mnt
| | | |-- opt
| | | |-- proc
| | | |-- root
| | | |-- run
| | | |-- sbin -> usr/sbin
| | | |-- srv
| | | |-- sys
| | | |-- tmp
| | | |-- usr
| | | `-- var
| | |-- sha256:9ab1c148438f20fdf82abda82959261867c3c368b63695b75a8744d9d0664861
| | | `-- dockerfile
| | `-- sha256:cc2f2a15de93e4cad65e5aa1a5fb6b5aeaf36838beb237c97fdfe79c42165869
| | |-- etc
| | |-- run
| | |-- usr
| | `-- var
| `-- demo-http:0fcedfe-nydus
| |-- blobs
| | |-- 3c84a6e5c8f867359256749e4a6c022be28bba91f941330605d37a3629fd9945
| | |-- 672fb081918d7ba2d0f43e86efd85b8d82c0e5aa606af989c9af83a7531aaee1
| | |-- a3249bd89e48fea052d256d8f8380e31357bd40df6d5bac44b0f4dfd0f190e7a
| | `-- ef0c8bfc53e2ade9a616040fdb1aef9b4e950857e1020fab20a282e2f0ac4332
| |-- bootstrap
| `-- bootstrap-parent
`-- ppeijie
|-- demo-http:0fcedfe
`-- demo-http:0fcedfe-nydus
`-- blobs
Converting layer sha256:57190868ac9b388928692bf16086aba4255639f3bb3cc9ce4738e3a189ea9bdd caught the error "Bad file descriptor".
wget https://github.com/dragonflyoss/image-service/archive/refs/heads/master.zip
make && make release && make docker-static
......
Compiling vhost v0.1.0 (https://github.com/cloud-hypervisor/vhost.git?branch=dragonball#42296415)
Compiling micro_http v0.2.0 (https://hub.fastgit.org/cloud-hypervisor/micro-http.git?branch=master#f4405cb5)
Compiling event-manager v0.2.0 (https://hub.fastgit.org/rust-vmm/event-manager.git?tag=v0.2.0#abac9289)
Compiling vhost v0.1.0 (https://hub.fastgit.org/cloud-hypervisor/vhost.git?branch=dragonball#42296415)
Compiling vm-virtio v0.1.0 (https://github.com/cloud-hypervisor/vm-virtio.git?branch=dragonball#cd0f7204)
Compiling rand_chacha v0.3.1
error[E0061]: this function takes 2 arguments but 3 arguments were supplied
--> /root/.cargo/git/checkouts/event-manager-e0a7f20e8f34fb60/abac928/src/epoll.rs:37:44
|
37 | let event_count = match self.epoll.wait(
| ^^^^ expected 2 arguments
38 | self.ready_events.capacity(),
| ----------------------------
39 | milliseconds,
| ------------
40 | &mut self.ready_events[..],
| -------------------------- supplied 3 arguments
error: aborting due to previous error
For more information about this error, try rustc --explain E0061
error: could not compile event-manager
To learn more, run the command again with --verbose.
warning: build failed, waiting for other jobs to finish...
error: build failed
make: *** [build-virtiofs] Error 101
rustc -V
rustc 1.50.0 (cb75ad5db 2021-02-10)
cargo -V
cargo 1.50.0 (f04e7fab7 2021-02-04)
or integrated with harbor?
nydusctl --sock $s info
caused nydusd-virtiofs to crash:
Jan 29 16:27:09 kant-server kata[30906]: time="2022-01-29T16:27:09.824797412+08:00" level=info msg="[2022-01-29 16:27:09.824650 +08:00] INFO [api/src/http.rs:97] <--- Get Uri { string: "/api/v1/daemon" }" name=containerd-shim-v2 pid=30906 sandbox=ae63a1205ada600b6fab254b19fc3cf277ec6da15e37877cfeba23ff69758509 source=virtcontainers/hypervisor subsystem=nydusd
Jan 29 16:27:09 kant-server kata[30906]: time="2022-01-29T16:27:09.826984021+08:00" level=info msg="thread 'main' panicked at 'not implemented', src/bin/nydusd/virtiofs.rs:311:9" name=containerd-shim-v2 pid=30906 sandbox=ae63a1205ada600b6fab254b19fc3cf277ec6da15e37877cfeba23ff69758509 source=virtcontainers/hypervisor subsystem=nydusd
Jan 29 16:27:09 kant-server containerd[27056]: time="2022-01-29T16:27:09.826984021+08:00" level=info msg="thread 'main' panicked at 'not implemented', src/bin/nydusd/virtiofs.rs:311:9" name=containerd-shim-v2 pid=30906 sandbox=ae63a1205ada600b6fab254b19fc3cf277ec6da15e37877cfeba23ff69758509 source=virtcontainers/hypervisor subsystem=nydusd
Jan 29 16:27:09 kant-server containerd[27056]: time="2022-01-29T16:27:09.827125911+08:00" level=info msg="note: run with RUST_BACKTRACE=1 environment variable to display a backtrace" name=containerd-shim-v2 pid=30906 sandbox=ae63a1205ada600b6fab254b19fc3cf277ec6da15e37877cfeba23ff69758509 source=virtcontainers/hypervisor subsystem=nydusd
Jan 29 16:27:09 kant-server kata[30906]: time="2022-01-29T16:27:09.827125911+08:00" level=info msg="note: run with RUST_BACKTRACE=1 environment variable to display a backtrace" name=containerd-shim-v2 pid=30906 sandbox=ae63a1205ada600b6fab254b19fc3cf277ec6da15e37877cfeba23ff69758509 source=virtcontainers/hypervisor subsystem=nydusd
For these APIs, returning HTTP status code 501 will be better than crashing the daemon process.
Or just implement them.
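The suggestion above can be sketched as follows; the dispatcher, route strings, and `HttpStatus` type are illustrative stand-ins, not nydusd's actual HTTP layer:

```rust
// Illustrative sketch only: endpoint names and types are hypothetical.
#[derive(Debug, PartialEq)]
enum HttpStatus {
    Ok,             // 200
    NotImplemented, // 501
}

fn handle(endpoint: &str) -> HttpStatus {
    match endpoint {
        "/api/v1/daemon" => HttpStatus::Ok,
        // Instead of panicking with unimplemented!() (which kills the whole
        // daemon process), answer unimplemented endpoints with 501.
        _ => HttpStatus::NotImplemented,
    }
}

fn main() {
    assert_eq!(handle("/api/v1/daemon"), HttpStatus::Ok);
    assert_eq!(handle("/api/v1/unknown"), HttpStatus::NotImplemented);
    println!("unimplemented endpoints return 501 instead of panicking");
}
```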
1. the question: crictl runp container-config.yaml pod-config.yaml error
**FATA[0000] run pod sandbox: rpc error: code = NotFound desc = failed to create containerd container: failed to create snapshot: missing parent "k8s.io/2/sha256:ba0dae6243cc9fa2890df40a625721fdbea5c94ca6da897acdd814d710149770" bucket: not found**
2. configuration information
vi /etc/containerd/config.toml
version = 2
root = "/var/lib/containerd"
state = "/run/containerd"
[proxy_plugins]
[proxy_plugins.nydus]
type = "snapshot"
address = "/run/containerd/containerd-nydus-grpc.sock"
......
[plugins]
[plugins."io.containerd.grpc.v1.cri"]
sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2"
systemd_cgroup = true
[plugins."io.containerd.grpc.v1.cri".containerd]
snapshotter = "nydus"
disable_snapshot_annotations = false
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
runtime_type = "io.containerd.runtime.v1.linux"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
......
vi /etc/nydusd-config.json
{
"device": {
"backend": {
"type": "registry",
"config": {
"scheme": "http",
"host": "192.168.1.130:8099",
"auth": "YWRtaW46SGFyYm9yMTIzNDU=",
"timeout": 5,
"connect_timeout": 5,
"retry_limit": 0
}
},
"cache": {
"type": "blobcache",
"compressed": true,
"config": {
"work_dir": "cache"
}
}
},
"mode": "direct",
"digest_validate": false,
"iostats_files": true,
"enable_xattr": false,
"fs_prefetch": {
"enable": true,
"threads_count": 10,
"bandwidth_rate": 1048576
}
}
3. version information
runc -v
runc version 1.0.0
commit: v1.0.0-0-g84113eef
spec: 1.0.2-dev
go: go1.15.14
libseccomp: 2.3.1
containerd -v
containerd github.com/containerd/containerd v1.4.8 7eba5930496d9bbe375fdf71603e610ad737d2b2
4. service status
systemctl status containerd -l
● containerd.service - containerd container runtime
Loaded: loaded (/etc/systemd/system/containerd.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2021-08-21 15:37:40 CST; 21s ago
Docs: https://containerd.io
Process: 466524 ExecStartPre=/sbin/modprobe overlay (code=exited, status=1/FAILURE)
Main PID: 466528 (containerd)
Tasks: 14
Memory: 21.8M
CGroup: /system.slice/containerd.service
└─466528 /usr/local/bin/containerd
Aug 21 15:37:40 k8s-master containerd[466528]: time="2021-08-21T15:37:40.810949181+08:00" level=warning msg="The image sha256:495471c7203f56f5444fb79029e0b5cd72709d3533b136b17044d7dcd86b3fef is not unpacked."
Aug 21 15:37:40 k8s-master containerd[466528]: time="2021-08-21T15:37:40.811664147+08:00" level=warning msg="The image sha256:7ce0143dee376bfd2937b499a46fb110bda3c629c195b84b1cf6e19be1a9e23b is not unpacked."
Aug 21 15:37:40 k8s-master containerd[466528]: time="2021-08-21T15:37:40.814468869+08:00" level=warning msg="The image sha256:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c is not unpacked."
Aug 21 15:37:40 k8s-master containerd[466528]: time="2021-08-21T15:37:40.815782268+08:00" level=warning msg="The image sha256:911b9682e03b53bd2bf63bb212163dc9edc2fcab7999028936f74672ae0740fb is not unpacked."
Aug 21 15:37:40 k8s-master containerd[466528]: time="2021-08-21T15:37:40.816301301+08:00" level=info msg="Start event monitor"
Aug 21 15:37:40 k8s-master containerd[466528]: time="2021-08-21T15:37:40.816334106+08:00" level=info msg="Start snapshots syncer"
Aug 21 15:37:40 k8s-master containerd[466528]: time="2021-08-21T15:37:40.816344412+08:00" level=info msg="Start cni network conf syncer"
Aug 21 15:37:40 k8s-master containerd[466528]: time="2021-08-21T15:37:40.816368754+08:00" level=info msg="Start streaming server"
Aug 21 15:37:58 k8s-master containerd[466528]: time="2021-08-21T15:37:58.150714933+08:00" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:nydus-container,Uid:,Namespace:,Attempt:0,}"
Aug 21 15:37:58 k8s-master containerd[466528]: time="2021-08-21T15:37:58.339257042+08:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nydus-container,Uid:,Namespace:,Attempt:0,} failed, error" error="rpc error: code = NotFound desc = failed to create containerd container: failed to create snapshot: missing parent "k8s.io/2/sha256:ba0dae6243cc9fa2890df40a625721fdbea5c94ca6da897acdd814d710149770" bucket: not found"
5. snapshotter containerd-nydus-grpc
containerd-nydus-grpc --nydusd-path /usr/bin/nydusd --config-path /etc/nydusd-config.json --log-level trace --root /var/lib/containerd/io.containerd.snapshotter.v1.nydus --address /run/containerd/containerd-nydus-grpc.sock
{"level":"info","msg":"found 0 daemons running","time":"2021-08-21T15:37:34.860829122+08:00"}
{"dir":"/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/new-206497482","level":"info","msg":"cleanupSnapshotDirectory /var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/new-206497482","time":"2021-08-21T15:37:58.229247885+08:00"}
{"dir":"/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/new-206497482","error":"failed to get daemon by snapshotID (new-206497482)","level":"error","msg":"failed to unmount","time":"2021-08-21T15:37:58.229320240+08:00"}
{"level":"info","msg":"umount nydus daemon of id new-206497482, mountpoint /var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/new-206497482","time":"2021-08-21T15:37:58.229383913+08:00"}
{"dir":"/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/new-206497482","error":"failed to get daemon by snapshotID (new-206497482)","level":"error","msg":"failed to unmount","time":"2021-08-21T15:37:58.229437402+08:00"}
{"key":"k8s.io/50/extract-231584148-3QGH sha256:ba0dae6243cc9fa2890df40a625721fdbea5c94ca6da897acdd814d710149770","level":"info","msg":"prepare key k8s.io/50/extract-231584148-3QGH sha256:ba0dae6243cc9fa2890df40a625721fdbea5c94ca6da897acdd814d710149770 parent labels","parent":"","time":"2021-08-21T15:37:58.235293375+08:00"}
{"key":"k8s.io/50/extract-231584148-3QGH sha256:ba0dae6243cc9fa2890df40a625721fdbea5c94ca6da897acdd814d710149770","level":"info","msg":"prepare for container layer k8s.io/50/extract-231584148-3QGH sha256:ba0dae6243cc9fa2890df40a625721fdbea5c94ca6da897acdd814d710149770","parent":"","time":"2021-08-21T15:37:58.235357255+08:00"}
{"level":"info","msg":"id 7 is data layer, continue to check parent layer","time":"2021-08-21T15:37:58.235391750+08:00"}
{"level":"info","msg":"id 7 is data layer, continue to check parent layer","time":"2021-08-21T15:37:58.235433881+08:00"}
{"dir":"/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/new-387506828","level":"info","msg":"cleanupSnapshotDirectory /var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/new-387506828","time":"2021-08-21T15:37:58.268876238+08:00"}
{"dir":"/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/new-387506828","error":"failed to get daemon by snapshotID (new-387506828)","level":"error","msg":"failed to unmount","time":"2021-08-21T15:37:58.268939370+08:00"}
{"level":"info","msg":"umount nydus daemon of id new-387506828, mountpoint /var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/new-387506828","time":"2021-08-21T15:37:58.268949007+08:00"}
{"dir":"/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/new-387506828","error":"failed to get daemon by snapshotID (new-387506828)","level":"error","msg":"failed to unmount","time":"2021-08-21T15:37:58.268954242+08:00"}
{"level":"info","msg":"cleanup: dirs=[/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/7]","time":"2021-08-21T15:37:58.331032998+08:00"}
{"dir":"/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/7","level":"info","msg":"cleanupSnapshotDirectory /var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/7","time":"2021-08-21T15:37:58.331099764+08:00"}
{"dir":"/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/7","error":"failed to get daemon by snapshotID (7)","level":"error","msg":"failed to unmount","time":"2021-08-21T15:37:58.331257899+08:00"}
{"level":"info","msg":"umount nydus daemon of id 7, mountpoint /var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/7","time":"2021-08-21T15:37:58.331278128+08:00"}
{"dir":"/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/7","error":"failed to get daemon by snapshotID (7)","level":"error","msg":"failed to unmount","time":"2021-08-21T15:37:58.331285608+08:00"}
6. Starting an ordinary (non-nydus) container works fine.
ctr run -d 192.168.1.130:8099/test/nginx:alpine nginx-alpine
nerdctl ps
nginx-alpine 192.168.1.130:8099/test/nginx:alpine "/docker-entrypoint.…" 13 seconds ago Up
If a non-canonical path, e.g. ./mydebug/mntpoint, is passed, the above function fails to find the mount point as expected. Looks like we need to ensure is_mounted() gets a canonical path, either by resolving it itself or by requiring one from the caller.
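The proposed fix can be sketched with std alone (the helper name below is illustrative, standing in for whatever is_mounted() would call before comparing against /proc/self/mounts):

```rust
use std::fs;
use std::io;
use std::path::PathBuf;

// Sketch: canonicalize the mountpoint first, so a relative path like
// ./mydebug/mntpoint resolves to its absolute form before any lookup
// against the mount table.
fn canonical_mountpoint(path: &str) -> io::Result<PathBuf> {
    fs::canonicalize(path)
}

fn main() -> io::Result<()> {
    // A relative path resolves to an absolute one before the mount lookup.
    let p = canonical_mountpoint(".")?;
    assert!(p.is_absolute());
    println!("canonical mountpoint: {}", p.display());
    Ok(())
}
```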
clap now offers a neater derive API, which can help us save hundreds of LOC in nydusd, nydus-image, etc.
use clap::Parser;
/// Simple program to greet a person
#[derive(Parser, Debug)]
#[clap(author, version, about, long_about = None)]
struct Args {
/// Name of the person to greet
#[clap(short, long)]
name: String,
/// Number of times to greet
#[clap(short, long, default_value_t = 1)]
count: u8,
}
fn main() {
let args = Args::parse();
for _ in 0..args.count {
println!("Hello {}!", args.name)
}
}
The logic seems like:
fs.manager.NewDaemon(d)
So it causes the pid of the daemon recorded in storage to always be zero.
Per trivy's README, trivy can scan container images, filesystem directories, and directories containing IaC files such as Terraform and Dockerfiles for vulnerabilities.
We need to add nydus support in trivy so that it can scan a nydus image or a nydus bootstrap.
Simply specify an image name (and a tag).
$ trivy image [YOUR_IMAGE_NAME]
For example:
$ trivy image python:3.4-alpine
I followed https://github.com/dragonflyoss/image-service/blob/master/docs/containerd-env-setup.md to test nydus-snapshotter. When I don't use --shared-daemon, it runs OK, but when I use --shared-daemon, it fails.
The code fstype.push_str(".") at utils/src/fuse.rs:269:9 should be corrected to fstype.push('.').
Detail message as below:
[image-service]# make
# TODO: switch to --out-dir when it moves to stable
# For now we build with separate target directories
cargo build --features=virtiofsd --target-dir target-virtiofsd
Finished dev [unoptimized + debuginfo] target(s) in 0.09s
cargo clippy --features=virtiofsd --target-dir target-virtiofsd -- -Dclippy::all
Finished dev [unoptimized + debuginfo] target(s) in 0.08s
cargo build --features=fusedev --target-dir target-fusedev
Compiling nydus-utils v0.1.0 (/home/RustProjects/image-service/utils)
Compiling rafs v0.1.0 (/home/RustProjects/image-service/rafs)
Compiling nydus-api v0.1.0 (/home/RustProjects/image-service/api)
Compiling nydus-rs v0.1.0 (/home/RustProjects/image-service)
Finished dev [unoptimized + debuginfo] target(s) in 5.12s
cargo clippy --features=fusedev --target-dir target-fusedev -- -Dclippy::all
Checking nydus-utils v0.1.0 (/home/RustProjects/image-service/utils)
error: calling `push_str()` using a single-character string literal
--> utils/src/fuse.rs:269:9
|
269 | fstype.push_str(".");
| ^^^^^^^^^^^^^^^^^^^^ help: consider using `push` with a character literal: `fstype.push('.')`
|
= note: `-D clippy::single-char-push-str` implied by `-D clippy::all`
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#single_char_push_str
error: aborting due to previous error
error: could not compile `nydus-utils`
To learn more, run the command again with --verbose.
make: *** [Makefile:17: build-fusedev] Error 101
[image-service]# cargo version
cargo 1.48.0 (65cbdd2dc 2020-10-14)
[ image-service]# rustc --version
rustc 1.48.0 (7eac88abb 2020-11-16)
Nydus-snapshotter's config-path option is required now, while actually only the registry/OSS auth has to be passed to nydusd. But nydus-snapshotter can now take the auth from the local host's docker configuration, which means it can assemble a complete JSON configuration file for nydusd by itself. End users could then skip the configuration step entirely, which is convenient.
&cli.StringFlag{
Name: "config-path",
Required: true,
Usage: "path to the configuration file",
Destination: &args.ConfigPath,
},
I hit a fatal error when converting an image with nydusify. Error message as below:
[BLOB sha256:5bed26d33875e6da1d9ff9a1054c5fef3bbeb22ee979e14b72acf72528de007b] Pushed --- [====================================================================] 100% 38 MB/38 MB
[BLOB sha256:f11b29a9c7306674a9479158c1b4259938af11b97359d9ac02030cc1095e9ed1] Pushed --- [====================================================================] 100% 52 kB/52 kB
[BLOB sha256:930bda195c84cf132344bf38edcad255317382f910503fef234a9ce3bff0f4dd] Pushed --- [====================================================================] 100% 484 B/484 B
[BLOB sha256:78bf9a5ad49e4ae42a83f4995ade4efc096f78fd38299cf05bc041e8cdda2a36] Pushed --- [====================================================================] 100% 7 B/7 B
[BOOT sha256:d7a40deeeb427d164874654efdd48ecbcefdba45740d3c69040e7fe35b93186a] Pushed --- [====================================================================] 100% 268 kB/268 kB
[MANI sha256:038c4c25f6d27215061cbd708037614923c2b1a32b129560d59d4884ee1df0d5] Pushing --- [--------------------------------------------------------------------] 0% 0 B/100 B
FATA[2021-01-27T01:34:32Z] push target image: PUT http://10.3.41.10:80/v2/nydusd/ubuntu-nydus/manifests/18.04: MANIFEST_INVALID: manifest invalid; map[]
My whole workflow is as below:
I have set up a private Harbor registry at 10.3.4.10:80.
(1) I have tested it with docker login/pull/push. In a word, the Harbor registry is OK.
(2) I configured the /etc/containerd/config.toml as below:
[proxy_plugins]
[proxy_plugins.nydus]
type = "snapshot"
address = "/run/containerd/containerd-nydus-grpc.sock"
[plugins]
[plugins.cri.cni]
bin_dir = "/opt/cni/bin"
conf_dir = "/etc/cni/net.d"
# Use nydus as default snapshot through CRI
[plugins.cri]
sandbox_image = "10.3.4.10/nydusd/pause:3.2"
[plugins.cri.containerd]
snapshotter = "nydus"
disable_snapshot_annotations = false
[plugins.cri.registry]
[plugins.cri.registry.configs]
[plugins.cri.registry.configs."10.3.4.10:80".tls]
insecure_skip_verify = true
#[plugins.cri.registry.configs."10.3.4.10:80".auth]
# username = "admin"
# password = "12345"
[plugins.cri.registry.mirrors]
[plugins.cri.registry.mirrors."10.3.4.10:80"]
endpoint = ["http://10.3.4.10:80"]
(3) crictl pull images ok
# crictl images
IMAGE TAG IMAGE ID SIZE
10.3.4.10:80/nydusd/busybox latest b97242f89c8a2 767kB
10.3.4.10:80/nydusd/ubuntu 18.04 4e5021d210f65 26.7MB
(4) nydusd config as below
{
"device": {
"backend": {
"type": "registry",
"config": {
"scheme": "http",
"host": "10.3.4.10:80",
"timeout": 5,
"connect_timeout": 5,
"retry_limit": 0
}
},
"cache": {
"type": "blobcache",
"config": {
"work_dir": "cache"
}
}
},
"mode": "direct",
"digest_validate": false,
"iostats_files": true,
"enable_xattr": false,
"fs_prefetch": {
"enable": true,
"threads_count": 10
}
}
(5) nydusd running ok
(6) nydusify convert failed
# nydusify convert --nydus-image /usr/local/bin/nydus-image --source-insecure --source 10.3.4.10:80/nydusd/ubuntu:18.04 --target-insecure --target 10.3.4.10:80/nydusd/ubuntu-nydus:18.04
[BLOB sha256:5bed26d33875e6da1d9ff9a1054c5fef3bbeb22ee979e14b72acf72528de007b] Pushed --- [====================================================================] 100% 38 MB/38 MB
[BLOB sha256:f11b29a9c7306674a9479158c1b4259938af11b97359d9ac02030cc1095e9ed1] Pushed --- [====================================================================] 100% 52 kB/52 kB
[BLOB sha256:930bda195c84cf132344bf38edcad255317382f910503fef234a9ce3bff0f4dd] Pushed --- [====================================================================] 100% 484 B/484 B
[BLOB sha256:78bf9a5ad49e4ae42a83f4995ade4efc096f78fd38299cf05bc041e8cdda2a36] Pushed --- [====================================================================] 100% 7 B/7 B
[BOOT sha256:d7a40deeeb427d164874654efdd48ecbcefdba45740d3c69040e7fe35b93186a] Pushed --- [====================================================================] 100% 268 kB/268 kB
[MANI sha256:038c4c25f6d27215061cbd708037614923c2b1a32b129560d59d4884ee1df0d5] Pushing --- [--------------------------------------------------------------------] 0% 0 B/100 B
FATA[2021-01-27T01:34:32Z] push target image: PUT http://10.3.4.10:80/v2/nydusd/ubuntu-nydus/manifests/18.04: MANIFEST_INVALID: manifest invalid; map[]
Please give some advice. Thank you!
As nydus-snapshotter has been accepted by containerd community containerd/project#83.
We are going to migrate nydus-snapshotter to a new repo https://github.com/containerd/nydus-snapshotter under containerd soon.
It seems the nydusd log level is always set to info.
I use the following command to complete the image conversion and push it to harbor
/root/Nydus/bin/nydusify convert \
--containerd-sock /run/docker/containerd/docker-containerd.sock \
--source-auth bmV3bGFuZDpOZXdsYW5kQDEyMw== \
--nydus-image /root/Nydus/bin/nydus-image \
--source 172.32.150.15/nydus/nginx_tyt:1.0.0 \
--target 172.32.150.15/paas_public/nginx_tyt:1.0.0-nydus \
--source-insecure true
containerd-nydus-grpc and containerd have also been configured and restarted according to the documentation.
But when I tried to pull the image from Harbor, an error occurred:
[root@vcapp23 ~]# docker pull 172.32.150.15:80/paas_public/nginx_tyt:1.0.0-nydus
Error response from daemon: mediaType in manifest should be 'application/vnd.docker.distribution.manifest.v2+json' not ''
I also tried to run this image with k8s, but the same image pull error occurred.
How can I use the converted image to run a container?
To make them more robust.
1. The problem: crictl runp container-config.yaml pod-config.yaml reports an error
FATA[0000] run pod sandbox: rpc error: code = Unknown desc = failed to create containerd task: OCI runtime create failed: expected cgroupsPath to be of format "slice:prefix:name" for systemd cgroups, got "/k8s.io/769b91ee7615e3fcab82235c0ab0c5bf23cddbc7cc9ce22e1d82d7076e62118c" instead: unknown
2. configuration information
vi /etc/containerd/config.toml
version = 2
root = "/var/lib/containerd"
state = "/run/containerd"
[proxy_plugins]
[proxy_plugins.nydus]
type = "snapshot"
address = "/run/containerd/containerd-nydus-grpc.sock"
......
[plugins]
[plugins."io.containerd.grpc.v1.cri"]
sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2"
systemd_cgroup = true
[plugins."io.containerd.grpc.v1.cri".containerd]
snapshotter = "nydus"
disable_snapshot_annotations = false
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
runtime_type = "io.containerd.runtime.v1.linux"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
......
vi /etc/nydusd-config.json
{
"device": {
"backend": {
"type": "registry",
"config": {
"scheme": "http",
"host": "192.168.1.130:8099",
"auth": "YWRtaW46SGFyYm9yMTIzNDU=",
"timeout": 5,
"connect_timeout": 5,
"retry_limit": 0
}
},
"cache": {
"type": "blobcache",
"compressed": true,
"config": {
"work_dir": "cache"
}
}
},
"mode": "direct",
"digest_validate": false,
"iostats_files": true,
"enable_xattr": false,
"fs_prefetch": {
"enable": true,
"threads_count": 10,
"bandwidth_rate": 1048576
}
}
3. configuration information
runc -v
runc version 1.0.0
commit: v1.0.0-0-g84113eef
spec: 1.0.2-dev
go: go1.15.14
libseccomp: 2.3.1
containerd -v
containerd github.com/containerd/containerd v1.4.8 7eba5930496d9bbe375fdf71603e610ad737d2b2
4. service status
systemctl status containerd -l
● containerd.service - containerd container runtime
Loaded: loaded (/etc/systemd/system/containerd.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2021-08-21 11:32:34 CST; 2min 20s ago
Docs: https://containerd.io
Process: 182847 ExecStartPre=/sbin/modprobe overlay (code=exited, status=1/FAILURE)
Main PID: 182851 (containerd)
Tasks: 14
Memory: 23.2M
CGroup: /system.slice/containerd.service
└─182851 /usr/local/bin/containerd
Aug 21 11:32:34 k8s-master containerd[182851]: time="2021-08-21T11:32:34.348941427+08:00" level=warning msg="The image sha256:7ce0143dee376bfd2937b499a46fb110bda3c629c195b84b1cf6e19be1a9e23b is not unpacked."
Aug 21 11:32:34 k8s-master containerd[182851]: time="2021-08-21T11:32:34.351974462+08:00" level=info msg="Start event monitor"
Aug 21 11:32:34 k8s-master containerd[182851]: time="2021-08-21T11:32:34.351999856+08:00" level=info msg="Start snapshots syncer"
Aug 21 11:32:34 k8s-master containerd[182851]: time="2021-08-21T11:32:34.352008398+08:00" level=info msg="Start cni network conf syncer"
Aug 21 11:32:34 k8s-master containerd[182851]: time="2021-08-21T11:32:34.352012637+08:00" level=info msg="Start streaming server"
Aug 21 11:33:07 k8s-master containerd[182851]: time="2021-08-21T11:33:07.719723332+08:00" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:nydus-container,Uid:,Namespace:,Attempt:0,}"
Aug 21 11:33:07 k8s-master containerd[182851]: time="2021-08-21T11:33:07.819571912+08:00" level=warning msg="runtime v1 is deprecated since containerd v1.4, consider using runtime v2"
Aug 21 11:33:07 k8s-master containerd[182851]: time="2021-08-21T11:33:07.821175602+08:00" level=info msg="shim containerd-shim started" address="unix:///run/containerd/s/2912f456fe3662db1686fa2e03cfca1cbf5eea59be26242d73ebcbb4e677f91e" debug=false pid=184112
Aug 21 11:33:07 k8s-master containerd[182851]: time="2021-08-21T11:33:07.846993728+08:00" level=info msg="shim reaped" id=898a5ff42e28f8b140d3b39f95d36529eafd7775a4ced1b74c077217364dfd8e
Aug 21 11:33:07 k8s-master containerd[182851]: time="2021-08-21T11:33:07.961343539+08:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nydus-container,Uid:,Namespace:,Attempt:0,} failed, error" error="failed to create containerd task: OCI runtime create failed: expected cgroupsPath to be of format "slice:prefix:name" for systemd cgroups, got "/k8s.io/898a5ff42e28f8b140d3b39f95d36529eafd7775a4ced1b74c077217364dfd8e" instead: unknown"
5. snapshotter containerd-nydus-grpc
containerd-nydus-grpc --nydusd-path /usr/bin/nydusd --config-path /etc/nydusd-config.json --log-level trace --root /var/lib/containerd/io.containerd.snapshotter.v1.nydus --address /run/containerd/containerd-nydus-grpc.sock
{"level":"info","msg":"found 0 daemons running","time":"2021-08-21T11:32:48.792370509+08:00"}
{"key":"k8s.io/24/898a5ff42e28f8b140d3b39f95d36529eafd7775a4ced1b74c077217364dfd8e","level":"info","msg":"prepare key k8s.io/24/898a5ff42e28f8b140d3b39f95d36529eafd7775a4ced1b74c077217364dfd8e parent k8s.io/2/sha256:ba0dae6243cc9fa2890df40a625721fdbea5c94ca6da897acdd814d710149770 labels","parent":"k8s.io/2/sha256:ba0dae6243cc9fa2890df40a625721fdbea5c94ca6da897acdd814d710149770","time":"2021-08-21T11:33:07.808490522+08:00"}
{"key":"k8s.io/24/898a5ff42e28f8b140d3b39f95d36529eafd7775a4ced1b74c077217364dfd8e","level":"info","msg":"prepare for container layer k8s.io/24/898a5ff42e28f8b140d3b39f95d36529eafd7775a4ced1b74c077217364dfd8e","parent":"k8s.io/2/sha256:ba0dae6243cc9fa2890df40a625721fdbea5c94ca6da897acdd814d710149770","time":"2021-08-21T11:33:07.808615638+08:00"}
{"level":"info","msg":"id 18 is data layer, continue to check parent layer","time":"2021-08-21T11:33:07.808633407+08:00"}
{"level":"info","msg":"id 1 is data layer, continue to check parent layer","time":"2021-08-21T11:33:07.808643524+08:00"}
{"level":"info","msg":"id 18 is data layer, continue to check parent layer","time":"2021-08-21T11:33:07.808667378+08:00"}
{"level":"info","msg":"id 1 is data layer, continue to check parent layer","time":"2021-08-21T11:33:07.808734520+08:00"}
{"level":"info","msg":"mount options [workdir=/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/18/work upperdir=/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/18/fs lowerdir=/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/1/fs]","time":"2021-08-21T11:33:07.808782298+08:00"}
{"level":"info","msg":"id 18 is data layer, continue to check parent layer","time":"2021-08-21T11:33:07.818018486+08:00"}
{"level":"info","msg":"id 1 is data layer, continue to check parent layer","time":"2021-08-21T11:33:07.818120232+08:00"}
{"level":"info","msg":"id 18 is data layer, continue to check parent layer","time":"2021-08-21T11:33:07.818148765+08:00"}
{"level":"info","msg":"id 1 is data layer, continue to check parent layer","time":"2021-08-21T11:33:07.818156486+08:00"}
{"level":"info","msg":"mount options [workdir=/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/18/work upperdir=/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/18/fs lowerdir=/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/1/fs]","time":"2021-08-21T11:33:07.818193849+08:00"}
{"level":"info","msg":"cleanup: dirs=[/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/18]","time":"2021-08-21T11:33:07.979308434+08:00"}
{"dir":"/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/18","level":"info","msg":"cleanupSnapshotDirectory /var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/18","time":"2021-08-21T11:33:07.982032025+08:00"}
{"dir":"/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/18","error":"failed to get daemon by snapshotID (18)","level":"error","msg":"failed to unmount","time":"2021-08-21T11:33:07.983769369+08:00"}
{"level":"info","msg":"umount nydus daemon of id 18, mountpoint /var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/18","time":"2021-08-21T11:33:07.983817167+08:00"}
{"dir":"/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/18","error":"failed to get daemon by snapshotID (18)","level":"error","msg":"failed to unmount","time":"2021-08-21T11:33:07.983844400+08:00"}
zstd is a widely adopted compression algorithm; compared to lz4, it offers both good speed and a better compression ratio.
Hi, nydus fails to compile with rustc/cargo v1.49.0.
[root@fedora33 image-service]# cargo version
cargo 1.49.0 (d00d64df9 2020-12-05)
[root@fedora33 image-service]# rustc --version
rustc 1.49.0 (e1884a8e3 2020-12-29)
[root@fedora33 image-service]# git log
commit 789e7331db084ad0953cbc9ac769fda213d85fdc (HEAD -> master, origin/master, origin/HEAD)
...
[root@fedora33 image-service]# make
# TODO: switch to --out-dir when it moves to stable
# For now we build with separate target directories
cargo build --features=virtiofsd --target-dir target-virtiofsd
Finished dev [unoptimized + debuginfo] target(s) in 0.40s
cargo clippy --features=virtiofsd --target-dir target-virtiofsd -- -Dclippy::all
Checking rafs v0.1.0 (/root/RustProjects/dragonflyoss/image-service/rafs)
error: `.map().collect()` can be replaced with `.try_for_each()`
--> rafs/src/metadata/layout.rs:653:9
|
653 | / self.entries
654 | | .iter()
655 | | .enumerate()
656 | | .map(|(idx, entry)| {
... |
667 | | })
668 | | .collect::<Result<()>>()?;
| |____________________________________^
|
= note: `-D clippy::map-collect-result-unit` implied by `-D clippy::all`
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#map_collect_result_unit
help: try this
|
653 | self.entries
654 | .iter()
655 | .enumerate().try_for_each(|(idx, entry)| {
656 | w.write_all(&u32::to_le_bytes(entry.readahead_offset))?;
657 | w.write_all(&u32::to_le_bytes(entry.readahead_size))?;
658 | w.write_all(entry.blob_id.as_bytes())?;
...
error: comparison to empty slice
--> rafs/src/storage/backend/oss.rs:81:12
|
81 | if canonicalized_oss_headers != "" {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: using `!is_empty` is clearer and more explicit: `!canonicalized_oss_headers.is_empty()`
|
= note: `-D clippy::comparison-to-empty` implied by `-D clippy::all`
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#comparison_to_empty
error: comparison to empty slice
--> rafs/src/storage/backend/oss.rs:102:12
|
102 | if self.bucket_name != "" {
| ^^^^^^^^^^^^^^^^^^^^^^ help: using `!is_empty` is clearer and more explicit: `!self.bucket_name.is_empty()`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#comparison_to_empty
error: comparison to empty slice
--> rafs/src/storage/backend/oss.rs:110:22
|
110 | let url = if self.bucket_name != "" {
| ^^^^^^^^^^^^^^^^^^^^^^ help: using `!is_empty` is clearer and more explicit: `!self.bucket_name.is_empty()`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#comparison_to_empty
error: comparison to empty slice
--> rafs/src/storage/backend/registry.rs:353:12
|
353 | if cached_auth != "" {
| ^^^^^^^^^^^^^^^^^ help: using `!is_empty` is clearer and more explicit: `!cached_auth.is_empty()`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#comparison_to_empty
error: comparison to empty slice
--> rafs/src/storage/backend/registry.rs:483:24
|
483 | if self.blob_url_scheme != "" {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^ help: using `!is_empty` is clearer and more explicit: `!self.blob_url_scheme.is_empty()`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#comparison_to_empty
error: comparison to empty slice
--> rafs/src/storage/backend/request.rs:138:31
|
138 | let ping_url = if config.proxy.ping_url != "" {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: using `!is_empty` is clearer and more explicit: `!config.proxy.ping_url.is_empty()`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#comparison_to_empty
error: aborting due to 7 previous errors
error: could not compile `rafs`
To learn more, run the command again with --verbose.
make: *** [Makefile:13: build-virtiofsd] Error 101
But I have fixed it myself. I can submit a PR if needed!
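For reference, the `map_collect_result_unit` fix clippy suggests above can be sketched as follows. This is a minimal, std-only illustration with a simplified `Entry` type; the field names mirror the clippy output but are not the actual rafs layout structs.

```rust
use std::io::{Result, Write};

// Simplified stand-in for the rafs readahead table entry from the clippy output.
struct Entry {
    readahead_offset: u32,
    readahead_size: u32,
}

// Instead of `.map(...).collect::<Result<()>>()`, clippy prefers `.try_for_each(...)`,
// which short-circuits on the first Err without building an intermediate collection.
fn store(entries: &[Entry], w: &mut impl Write) -> Result<()> {
    entries.iter().try_for_each(|entry| {
        w.write_all(&entry.readahead_offset.to_le_bytes())?;
        w.write_all(&entry.readahead_size.to_le_bytes())?;
        Ok(())
    })
}

fn main() -> Result<()> {
    let mut buf = Vec::new();
    store(
        &[Entry { readahead_offset: 1, readahead_size: 2 }],
        &mut buf,
    )?;
    // Two u32 values written as little-endian bytes: 8 bytes total.
    println!("wrote {} bytes", buf.len());
    Ok(())
}
```

The `comparison_to_empty` errors are mechanical: replace `s != ""` with `!s.is_empty()` at each reported location.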
When I test containerd with nydus-snapshotter following the doc of this repository:
https://github.com/dragonflyoss/image-service/blob/master/docs/containerd-env-setup.md
and run:
crictl --runtime-endpoint=unix:///run/containerd/containerd.sock run container-config.yaml pod-config.yaml
the problem:
failed to get sandbox image "k8s.gcr.io/pause:3.2"
I can pull the image using docker.
How can I set the pause image so that it can be pulled by crictl?
ctr images pull --plain-http 192.168.1.130:8099/test/nginx:alpine
ctr images push --plain-http 192.168.1.130:8099/test/nginx:alpine
Both commands succeed, but the following conversion fails:
nydusify convert --nydus-image /usr/bin/nydus-image --source nginx:alpine --target 192.168.1.130:8099/test/nginx-nydus --target-insecure
The conversion failed. What is the reason? Is any configuration missing? Eagerly looking forward to a reply.
vi /etc/containerd/config.toml
[plugins]
[plugins."io.containerd.grpc.v1.cri"]
[plugins."io.containerd.grpc.v1.cri".registry]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
endpoint = ["https://registry.cn-hangzhou.aliyuncs.com"]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."192.168.1.130:8099"]
endpoint = ["http://192.168.1.130:8099"]
[plugins."io.containerd.grpc.v1.cri".registry.configs]
[plugins."io.containerd.grpc.v1.cri".registry.configs."192.168.1.130:8099"]
[plugins."io.containerd.grpc.v1.cri".registry.configs."192.168.1.130:8099".tls]
insecure_skip_verify = true
[plugins."io.containerd.grpc.v1.cri".registry.configs."192.168.1.130:8099".auth]
username = "admin"
password = "Harbor12345"
/etc/nydusd-config.json
{
"device": {
"backend": {
"type": "registry",
"config": {
"scheme": "http",
"host": "192.168.1.130:8099",
"auth": "YWRtaW46SGFyYm9yMTIzNDU=",
"timeout": 5,
"connect_timeout": 5,
"retry_limit": 0
}
},
"cache": {
"type": "blobcache",
"compressed": true,
"config": {
"work_dir": "cache"
}
}
},
"mode": "direct",
"digest_validate": false,
"iostats_files": true,
"enable_xattr": false,
"fs_prefetch": {
"enable": true,
"threads_count": 10,
"bandwidth_rate": 1048576
}
}
runc -v
runc version 1.0.0
commit: v1.0.0-0-g84113eef
spec: 1.0.2-dev
go: go1.15.14
libseccomp: 2.3.1
containerd -v
containerd github.com/containerd/containerd v1.4.8 7eba5930496d9bbe375fdf71603e610ad737d2b2
What is the relationship between Nydus and this issue: containerd/containerd#2267?
Hi, I followed the instruction doc Nydus Setup for Containerd Environment
several times, but it never succeeds.
# make release
# make nydusify
# make nydus-snapshotter
# sudo cp target-fusedev/release/nydusd target-fusedev/release/nydus-image /usr/local/bin
# sudo cp contrib/nydusify/cmd/nydusify contrib/nydus-snapshotter/bin/containerd-nydus-grpc /usr/local/bin
# cat /etc/nydusd-config.json
{
"device": {
"backend": {
"type": "registry",
"config": {
"scheme": "http",
"timeout": 5,
"connect_timeout": 5,
"retry_limit": 0
}
},
"cache": {
"type": "blobcache",
"config": {
"work_dir": "cache"
}
}
},
"mode": "direct",
"digest_validate": false,
"iostats_files": true,
"enable_xattr": false,
"fs_prefetch": {
"enable": true,
"threads_count": 10
}
}
# /usr/local/bin/containerd-nydus-grpc --nydusd-path /usr/local/bin/nydusd --config-path /etc/nydusd-config.json --shared-daemon --log-level trace --root /var/lib/containerd/io.containerd.snapshotter.v1.nydus --address /run/containerd/containerd-nydus-grpc.sock
{"daemon":"shared_daemon","level":"debug","msg":"error: The following required arguments were not provided:","time":"2021-01-21T03:08:22.736008270Z"}
{"daemon":"shared_daemon","level":"debug","msg":" --config \u003cconfig\u003e","time":"2021-01-21T03:08:22.736400645Z"}
{"daemon":"shared_daemon","level":"debug","msg":"","time":"2021-01-21T03:08:22.736502936Z"}
{"daemon":"shared_daemon","level":"debug","msg":"USAGE:","time":"2021-01-21T03:08:22.736560103Z"}
{"daemon":"shared_daemon","level":"debug","msg":" nydusd --apisock \u003capisock\u003e --config \u003cconfig\u003e --log-level \u003clog-level\u003e --mountpoint \u003cmountpoint\u003e --rlimit-nofile \u003crlimit-nofile\u003e --thread-num \u003cthreads\u003e","time":"2021-01-21T03:08:22.736602724Z"}
{"daemon":"shared_daemon","level":"debug","msg":"","time":"2021-01-21T03:08:22.736649061Z"}
{"daemon":"shared_daemon","level":"debug","msg":"For more information try --help","time":"2021-01-21T03:08:22.736679489Z"}
{"daemon":"shared_daemon","level":"info","msg":"quits","time":"2021-01-21T03:08:22.736709614Z"}
...
[proxy_plugins]
[proxy_plugins.nydus]
type = "snapshot"
address = "/run/containerd/containerd-nydus-grpc.sock"
...
[plugins.cri]
[plugins.cri.containerd]
snapshotter = "nydus"
disable_snapshot_annotations = false
# docker run -d -p 5000:5000 --restart=always --name registry registry:2.7
nydusify convert --nydus-image /usr/local/bin/nydus-image --source ubuntu:18.04 --target localhost:5000/ubuntu-nydus:18.04
[BLOB sha256:f22ccc0b8772d8e1bcb40f137b373686bc27427a70c0e41dd22b38016e09e7e0] Pushed --- [====================================================================] 100% 38 MB/38 MB
[BLOB sha256:3cf8fb62ba5ffb221a2edb2208741346eb4d2d99a174138e4afbb69ce1fd9966] Pushed --- [====================================================================] 100% 484 B/484 B
[BLOB sha256:e80c964ece6a3edf0db1cfc72ae0e6f0699fb776bbfcc92b708fbb945b0b9547] Pushed --- [====================================================================] 100% 7 B/7 B
[BOOT sha256:268351132cd895d48f399cf87b7d72fa853a72971587dc99382f7294b4a7704b] Pushed --- [====================================================================] 100% 268 kB/268 kB
[MANI sha256:1f2f82b75db5646934931901404e0e01c8eeb5f322bd2e3a3f20aa2aba0b4077] Pushed --- [====================================================================] 100% 100 B/100 B
INFO[2021-01-21T03:16:40Z] Success convert image ubuntu:18.04 to localhost:5000/ubuntu-nydus:18.04
# cat container-config.yaml
metadata:
name: nydus-container
image:
image: localhost:5000/ubuntu-nydus:18.04
command:
- /bin/sleep
args:
- 600
log_path: container.1.log
# cat pod-config.yaml
metadata:
attempt: 1
name: local-registry
namespace: default
uid: jdhshd83djaidwnduwk28bcsb
log_directory: /tmp
Errors as below:
# crictl run container-config.yaml pod-config.yaml
FATA[0000] running container: run pod sandbox: rpc error: code = NotFound desc = failed to create containerd container: failed to create snapshot: missing parent "k8s.io/2/sha256:ba0dae6243cc9fa2890df40a625721fdbea5c94ca6da897acdd814d710149770" bucket: not found
At the same time, the nydus grpc error messages are as below:
...
{"dir":"/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/new-090215677","level":"info","msg":"cleanupSnapshotDirectory /var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/new-090215677","time":"2021-01-21T03:20:00.358090237Z"}
{"dir":"/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/new-090215677","error":"failed to get daemon by snapshotID (new-090215677)","level":"error","msg":"failed to unmount","time":"2021-01-21T03:20:00.358347269Z"}
{"level":"info","msg":"umount nydus daemon of id new-090215677, mountpoint /var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/new-090215677","time":"2021-01-21T03:20:00.358408818Z"}
{"dir":"/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/new-090215677","error":"failed to get daemon by snapshotID (new-090215677)","level":"error","msg":"failed to unmount","time":"2021-01-21T03:20:00.358503005Z"}
{"key":"k8s.io/20/extract-360727213-SRAa sha256:ba0dae6243cc9fa2890df40a625721fdbea5c94ca6da897acdd814d710149770","level":"info","msg":"prepare key k8s.io/20/extract-360727213-SRAa sha256:ba0dae6243cc9fa2890df40a625721fdbea5c94ca6da897acdd814d710149770 parent labels","parent":"","time":"2021-01-21T03:20:00.415524768Z"}
{"key":"k8s.io/20/extract-360727213-SRAa sha256:ba0dae6243cc9fa2890df40a625721fdbea5c94ca6da897acdd814d710149770","level":"info","msg":"prepare for container layer k8s.io/20/extract-360727213-SRAa sha256:ba0dae6243cc9fa2890df40a625721fdbea5c94ca6da897acdd814d710149770","parent":"","time":"2021-01-21T03:20:00.415616963Z"}
{"level":"info","msg":"id 4 is data layer, continue to check parent layer","time":"2021-01-21T03:20:00.415654346Z"}
{"level":"info","msg":"id 4 is data layer, continue to check parent layer","time":"2021-01-21T03:20:00.415677578Z"}
{"dir":"/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/new-690295351","level":"info","msg":"cleanupSnapshotDirectory /var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/new-690295351","time":"2021-01-21T03:20:00.588210957Z"}
{"dir":"/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/new-690295351","error":"failed to get daemon by snapshotID (new-690295351)","level":"error","msg":"failed to unmount","time":"2021-01-21T03:20:00.588280298Z"}
{"level":"info","msg":"umount nydus daemon of id new-690295351, mountpoint /var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/new-690295351","time":"2021-01-21T03:20:00.588301422Z"}
{"dir":"/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/new-690295351","error":"failed to get daemon by snapshotID (new-690295351)","level":"error","msg":"failed to unmount","time":"2021-01-21T03:20:00.588317896Z"}
{"level":"info","msg":"cleanup: dirs=[/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/4]","time":"2021-01-21T03:20:01.640877273Z"}
{"dir":"/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/4","level":"info","msg":"cleanupSnapshotDirectory /var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/4","time":"2021-01-21T03:20:01.640985337Z"}
{"dir":"/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/4","error":"failed to get daemon by snapshotID (4)","level":"error","msg":"failed to unmount","time":"2021-01-21T03:20:01.641019203Z"}
{"level":"info","msg":"umount nydus daemon of id 4, mountpoint /var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/4","time":"2021-01-21T03:20:01.641045895Z"}
{"dir":"/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/4","error":"failed to get daemon by snapshotID (4)","level":"error","msg":"failed to unmount","time":"2021-01-21T03:20:01.641067663Z"}
Other Info
# containerd --version
containerd github.com/containerd/containerd v1.4.3 269548fa27e0089a8b8278fc4fc781d7f65a939b
# uname -a
Linux host-10-33-41-246 4.19.168 #1 SMP Tue Jan 19 07:22:27 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
Please help me, thanks!
We have quite a few places where we hand-write error implementations. We can use thiserror
(https://docs.rs/thiserror/latest/thiserror/) to simplify them.
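To make the saving concrete, here is the kind of hand-written boilerplate that thiserror's `#[derive(Error)]` with `#[error("...")]` and `#[from]` attributes would generate automatically. This is a std-only sketch; the `DaemonError` variants are illustrative, not the actual nydus types.

```rust
use std::fmt;
use std::io;

// Hand-written error plumbing. With thiserror, the Display and From impls
// below would be replaced by derive attributes on the enum itself.
#[derive(Debug)]
pub enum DaemonError {
    NotReady,
    Io(io::Error),
}

impl fmt::Display for DaemonError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            DaemonError::NotReady => write!(f, "daemon is not ready"),
            DaemonError::Io(e) => write!(f, "I/O error: {}", e),
        }
    }
}

impl From<io::Error> for DaemonError {
    fn from(e: io::Error) -> Self {
        DaemonError::Io(e)
    }
}

fn main() {
    // The `?` operator and `.into()` work through the From impl.
    let err: DaemonError = io::Error::new(io::ErrorKind::Other, "oops").into();
    println!("{}", err);
}
```

With thiserror the whole thing collapses to the enum definition plus `#[error(...)]`/`#[from]` attributes, which is the motivation stated above.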
$ nerdctl --snapshotter nydus run --rm -it ghcr.io/dragonflyoss/image-service/alpine:nydus-latest sh
ghcr.io/dragonflyoss/image-service/alpine:nydus-latest: resolved |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:6188fc0d382e64fff54f936a9211e20ad6f6839b17171cc3d2faedae2463de9d: done |++++++++++++++++++++++++++++++++++++++|
config-sha256:04788cb4b6484c752c56874e6fe5e973d6360304879098b2d28309244455d6f8: downloading |--------------------------------------| 0.0 B/373.0 B
elapsed: 4.0 s total: 1.0 Ki (262.0 B/s)
FATA[0004] failed to prepare extraction snapshot "extract-595858326-ZU8W sha256:1056951c8e08d6d8c1689d919c0454b083ff0cd0bca52fdb16c4b6ebcf1e86cc": failed to find image ref of snapshot 88, labels map[containerd.io/snapshot.ref:sha256:1056951c8e08d6d8c1689d919c0454b083ff0cd0bca52fdb16c4b6ebcf1e86cc containerd.io/snapshot/nydus-blob-ids:["5038f9ebd22cf8b77575df856f95b2f31161d745a2ab56825c263d8a3043a0ff"] containerd.io/snapshot/nydus-bootstrap:true]: unknown
Snapshotter:
$ /usr/local/bin/containerd-nydus-grpc --config-path /etc/nydusd-config.json --shared-daemon --log-level info --root /var/lib/containerd/io.containerd.snapshotter.v1.nydus --cache-dir /var/lib/nydus/cache --nydusd-path /usr/local/bin/nydusd --nydusimg-path /usr/local/bin/nydus-image --log-to-stdout
INFO[2022-01-21T10:33:47.839760988+08:00] gc goroutine start...
INFO[2022-01-21T10:33:47.839952767+08:00] found 0 daemons running
INFO[2022-01-21T10:34:00.587861343+08:00] prepare key default/172/extract-582324985-ByGC sha256:5038f9ebd22cf8b77575df856f95b2f31161d745a2ab56825c263d8a3043a0ff parent labels key="default/172/extract-582324985-ByGC sha256:5038f9ebd22cf8b77575df856f95b2f31161d745a2ab56825c263d8a3043a0ff" parent=
INFO[2022-01-21T10:34:00.587954757+08:00] nydus data layer, skip download and unpack default/172/extract-582324985-ByGC sha256:5038f9ebd22cf8b77575df856f95b2f31161d745a2ab56825c263d8a3043a0ff key="default/172/extract-582324985-ByGC sha256:5038f9ebd22cf8b77575df856f95b2f31161d745a2ab56825c263d8a3043a0ff" parent=
INFO[2022-01-21T10:34:00.601187548+08:00] prepare key default/173/extract-595858326-ZU8W sha256:1056951c8e08d6d8c1689d919c0454b083ff0cd0bca52fdb16c4b6ebcf1e86cc parent sha256:5038f9ebd22cf8b77575df856f95b2f31161d745a2ab56825c263d8a3043a0ff labels key="default/173/extract-595858326-ZU8W sha256:1056951c8e08d6d8c1689d919c0454b083ff0cd0bca52fdb16c4b6ebcf1e86cc" parent="sha256:5038f9ebd22cf8b77575df856f95b2f31161d745a2ab56825c263d8a3043a0ff"
INFO[2022-01-21T10:34:00.601273397+08:00] prepare for container layer default/173/extract-595858326-ZU8W sha256:1056951c8e08d6d8c1689d919c0454b083ff0cd0bca52fdb16c4b6ebcf1e86cc key="default/173/extract-595858326-ZU8W sha256:1056951c8e08d6d8c1689d919c0454b083ff0cd0bca52fdb16c4b6ebcf1e86cc" parent="sha256:5038f9ebd22cf8b77575df856f95b2f31161d745a2ab56825c263d8a3043a0ff"
INFO[2022-01-21T10:34:00.601340047+08:00] found nydus meta layer id 88, parpare remote snapshot key="default/173/extract-595858326-ZU8W sha256:1056951c8e08d6d8c1689d919c0454b083ff0cd0bca52fdb16c4b6ebcf1e86cc" parent="sha256:5038f9ebd22cf8b77575df856f95b2f31161d745a2ab56825c263d8a3043a0ff"
INFO[2022-01-21T10:34:00.601381068+08:00] prepare remote snapshot mountpoint /var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/88/fs
INFO[2022-01-21T10:34:00.791220203+08:00] cleanup: dirs=[/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/88 /var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/87]
INFO[2022-01-21T10:34:00.791273211+08:00] cleanupSnapshotDirectory /var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/88 dir=/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/88
INFO[2022-01-21T10:34:00.791807028+08:00] cleanupSnapshotDirectory /var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/87 dir=/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/87
I did it as the official docs describe. Steps as below:
[root@hah nydus-dev]# tree tmp/ -L 2
tmp/
├── localhost:5000
│ └── ubuntu:18.04-nydus
└── ubuntu:18.04
├── manifest.json
├── sha256:3cf8fb62ba5ffb221a2edb2208741346eb4d2d99a174138e4afbb69ce1fd9966
├── sha256:e80c964ece6a3edf0db1cfc72ae0e6f0699fb776bbfcc92b708fbb945b0b9547
└── sha256:f22ccc0b8772d8e1bcb40f137b373686bc27427a70c0e41dd22b38016e09e7e0
6 directories, 1 file
and ubuntu:18.04-nydus:
[root@hah nydus-dev]# tree tmp/localhost\:5000/ -L 2
tmp/localhost:5000/
└── ubuntu:18.04-nydus
├── blobs
├── bootstrap
├── bootstrap-parent
├── config.json
└── manifest.json
2 directories, 4 files
[root@hah nydus-dev]# cat localfs.json
{
"device": {
"backend": {
"type": "localfs",
"config": {
"dir": "./tmp/localhost:5000/ubuntu:18.04-nydus/blobs"
}
}
},
"mode": "direct"
}
[root@hah nydus-dev]# ../image-service/target-fusedev/debug/nydusd --config localfs.json --mountpoint ./mnt/ --bootstrap tmp/localhost\:5000/ubuntu\:18.04-nydus/bootstrap --log-level trace
2021-01-18T03:25:00+00:00 - INFO - rafs superblock features: COMPRESS_LZ4_BLOCK DIGESTER_BLAKE3 EXPLICIT_UID_GID HAS_XATTR
2021-01-18T03:25:00+00:00 - INFO - rafs imported
2021-01-18T03:25:00+00:00 - INFO - rafs mounted: mode=direct digest_validate=false iostats_files=false
2021-01-18T03:25:00+00:00 - DEBUG - pseudo fs mount iterate "/"
2021-01-18T03:25:00+00:00 - TRACE - fuse: vfs fs_idx 1 inode 1 fuse ino 0x100000000000001
2021-01-18T03:25:00+00:00 - TRACE - fs_idx 1 inode 1
2021-01-18T03:25:00+00:00 - INFO - vfs mounted
2021-01-18T03:25:00+00:00 - INFO - mount source nydusfs dest /root/RustProjects/dragonflyoss/nydus-dev/mnt with fstype fuse opts default_permissions,allow_other,fd=8,rootmode=40000,user_id=0,group_id=0 fd 8
2021-01-18T03:25:00+00:00 - INFO - starting fuse daemon
2021-01-18T03:25:00+00:00 - TRACE - fuse: new req Init: InHeader { len: 56, opcode: 26, unique: 2, nodeid: 0, uid: 0, gid: 0, pid: 0, padding: 0 }
2021-01-18T03:25:00+00:00 - INFO - FUSE INIT major 7 minor 31
in_opts: ASYNC_READ | POSIX_LOCKS | ATOMIC_O_TRUNC | EXPORT_SUPPORT | BIG_WRITES | DONT_MASK | SPLICE_WRITE | SPLICE_MOVE | SPLICE_READ | FLOCK_LOCKS | HAS_IOCTL_DIR | AUTO_INVAL_DATA | DO_READDIRPLUS | READDIRPLUS_AUTO | ASYNC_DIO | WRITEBACK_CACHE | ZERO_MESSAGE_OPEN | PARALLEL_DIROPS | HANDLE_KILLPRIV | POSIX_ACL | ABORT_ERROR | MAX_PAGES | CACHE_SYMLINKS | ZERO_MESSAGE_OPENDIR | EXPLICIT_INVAL_DATA
out_opts: ASYNC_READ | ATOMIC_O_TRUNC | BIG_WRITES | HAS_IOCTL_DIR | AUTO_INVAL_DATA | DO_READDIRPLUS | READDIRPLUS_AUTO | ASYNC_DIO | WRITEBACK_CACHE | ZERO_MESSAGE_OPEN | PARALLEL_DIROPS | MAX_PAGES | CACHE_SYMLINKS | ZERO_MESSAGE_OPENDIR | EXPLICIT_INVAL_DATA
2021-01-18T03:25:00+00:00 - TRACE - fuse: new reply OutHeader { len: 80, error: 0, unique: 2 }
2021-01-18T03:25:01+00:00 - WARN - read fuse dev failed on fd 13: EINVAL: Invalid argument
2021-01-18T03:25:01+00:00 - INFO - nydusd quits
...
2021-01-18T03:25:01+00:00 - WARN - read fuse dev failed on fd 13: EINVAL: Invalid argument
....
How can I solve this? Thank you!
Ref: https://rustsec.org/
We can let CI check dependencies for vulnerabilities and also deny improper dependencies.
I feel nydus has the same idea as Stargz Snapshotter
(https://github.com/containerd/stargz-snapshotter) about speeding up the container bootstrap process, and stargz-snapshotter also has the eStargz image format, similar to nydus's rafs image format.
Why set up a new project, Nydus, instead of basing it on Stargz Snapshotter? Is there a doc explaining the relationship between Nydus
and Stargz Snapshotter?
Two nydus runtime binaries are produced at release time: one for the fusedev frontend, the other for virtiofs.
When nydus works as a virtiofs daemon to supply kata containers with a rootfs, users have to deploy both binaries on the same server, which is confusing for end users.
It would be better to let a single nydus binary be configured to work as either a FUSE daemon or a virtiofs daemon.
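The proposal above amounts to selecting the frontend at startup rather than at build time. A hedged, std-only sketch of what that selection could look like; the flag value names and the `DaemonBackend` enum are illustrative, not nydusd's actual CLI:

```rust
// Sketch: one binary choosing its FUSE vs. virtiofs frontend from a
// startup argument, instead of shipping two separately built binaries.
#[derive(Debug, PartialEq)]
enum DaemonBackend {
    Fusedev,
    Virtiofs,
}

// Parse an illustrative backend argument; unknown values are rejected
// so a misconfigured deployment fails loudly at startup.
fn parse_backend(arg: &str) -> Result<DaemonBackend, String> {
    match arg {
        "fusedev" => Ok(DaemonBackend::Fusedev),
        "virtiofs" => Ok(DaemonBackend::Virtiofs),
        other => Err(format!("unsupported daemon backend: {}", other)),
    }
}

fn main() {
    // Default to fusedev when no argument is given (assumption for this sketch).
    let backend = std::env::args()
        .nth(1)
        .map(|a| parse_backend(&a))
        .unwrap_or(Ok(DaemonBackend::Fusedev));
    println!("selected backend: {:?}", backend);
}
```

The point is that the frontend becomes a runtime configuration decision, so operators deploy and package exactly one binary.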
Reference: containerd/nydus-snapshotter#5
Fixes clippy errors.
Nydus-snapshotter is being migrated from this repo to the containerd organization.
Right now, nydus-snapshotter depends on nydusify for those image annotations.
The annotation names are prefixed with containerd.io/snapshot, and they will all be passed to the snapshotter by containerd.
So we'd better move those annotations' definitions to the snapshotter.
For now View doesn’t work as expected.