aws-nitro-enclaves-cli's Issues

Build enclave image without Docker daemon from local docker archive

What

Right now, you need a running Docker daemon to build an enclave image. The LinuxKit version currently included attempts to pull the image using the Docker daemon. linuxkit/linuxkit#3573 now lets LinuxKit pull the images directly without the need for a Docker daemon.

Why

As part of our enclave support, we (M10 Networks) want to be able to build enclave images entirely in a Dockerfile. We distribute all of our services through Docker images, and all of our builds are performed entirely inside Dockerfiles. While it would be possible to change this for Nitro, I don't think it should be necessary when a simple change here would avoid it.

How

To supply this functionality more or less out of the box, all that would be required is to update the bundled LinuxKit to one of the latest builds. That would let users use skopeo, or a similar tool, to transfer their local image into LinuxKit's cache at ~/.linuxkit/cache themselves, as sketched below.
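
A minimal sketch of that skopeo step, assuming LinuxKit's cache at ~/.linuxkit/cache is a standard OCI image layout (the archive name and image reference are placeholders):

# export the local image, then copy it into the LinuxKit cache
docker save myimage:latest -o myimage.tar
skopeo copy docker-archive:./myimage.tar "oci:$HOME/.linuxkit/cache:docker.io/library/myimage:latest"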

A more user-friendly follow-on would be to allow users to simply pass in a path to their own Docker image archive. At that point, the CLI would need to copy the image into the LinuxKit cache itself.

As a temporary workaround for our use case, I can simply replace the LinuxKit binary in /usr/share/nitro_enclaves/blobs with my own. I think this use case is common enough that it should either be supported through easier means or documented in some way.
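
That workaround, spelled out as a sketch (the paths are from this report; the replacement binary is assumed to be a self-built linuxkit):

# keep the original blob around, then drop in a newer linuxkit build
sudo mv /usr/share/nitro_enclaves/blobs/linuxkit /usr/share/nitro_enclaves/blobs/linuxkit.orig
sudo cp ./linuxkit /usr/share/nitro_enclaves/blobs/linuxkit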

Is blobs/linuxkit docker's linuxkit?

Where has the linuxkit binary in this repo been derived from? If it belongs to Docker, should credit be given back to them for authoring that code, e.g. to satisfy licensing requirements?

vsock_proxy packaged with release v0.1.0 is buggy

It would be great to see the next release soon, since the vsock_proxy packaged with release v0.1.0 has bugs related to YAML allowlist parsing (it panics even with the test config located in a subdirectory).

vsock_proxy from the main branch just works.
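
For reference, a minimal config of the shape the proxy is expected to parse (the exact schema is an assumption based on the example config shipped with the repo; the endpoint is a placeholder):

allowlist:
- {address: kms.us-east-1.amazonaws.com, port: 443}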

nitro-cli run-enclave should enforce that the allocated memory has space to unpack the initramfs

#188
#194

These seem to have the same root cause: the allocated memory for the enclave is not enough for the kernel running inside the enclave to unpack the initramfs, and the user sees in the kernel logs:

  • "Failed to unpack initramfs"

Followed by:

  • Could not open /env file: No such file or directory
  • Could not open /env file: No such file or directory

The guideline to overcome this problem is to always give the enclave at least 4 times as much memory as the size of the enclave image file (EIF).

nitro-cli should enforce this rule and report an intuitive error when it is violated.
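
A sketch of applying the guideline by hand until the CLI enforces it (EIF_PATH is a placeholder; the division rounds down, so add headroom as needed):

# size the enclave at 4x the EIF size
EIF_SIZE_MB=$(( $(stat -c%s "$EIF_PATH") / 1024 / 1024 ))
nitro-cli run-enclave --eif-path "$EIF_PATH" --cpu-count 2 --memory $(( EIF_SIZE_MB * 4 ))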

Failed to start enclave with unclear error: Failed to connect to specific enclave process

My start script:

sed -i "s/^\s*memory_mib.*$/memory_mib: $MEMORY_MB/g" /etc/nitro_enclaves/allocator.yaml
sed -i "s/^\s*cpu_count.*$/cpu_count: $CPU_COUNT/g" /etc/nitro_enclaves/allocator.yaml
systemctl restart nitro-enclaves-allocator.service
echo "nitro-enclaves-allocator restarted"

echo "starting enclave..."
nitro-cli run-enclave --eif-path $EIF_PATH --memory $MEMORY_MB --cpu-count $CPU_COUNT --enclave-cid $CID --debug-mode
nitro-cli console --enclave-id $(nitro-cli describe-enclaves | jq -r '.[0].EnclaveId')
The output:

updating allocator: CPU_COUNT=8, MEMORY_MB=20000...
nitro-enclaves-allocator restarted
starting enclave...
Start allocating memory...
Started enclave with enclave-cid: 42, memory: 20000 MiB, cpu-ids: [1, 9, 2, 10, 3, 11, 4, 12]
{
  "EnclaveID": "i-04e1bfe628305b166-enc17aab64b3f7703d",
  "ProcessID": 7885,
  "EnclaveCID": 42,
  "NumberOfCPUs": 8,
  "CPUIDs": [
    1,
    9,
    2,
    10,
    3,
    11,
    4,
    12
  ],
  "MemoryMiB": 20000
}
[ E11 ] Socket error. This is used as an error for catching any other socket operation errors not covered by previous custom errors.

For more details, please visit https://docs.aws.amazon.com/enclaves/latest/user/cli-errors.html#E11

If you open a support ticket, please provide the error log found at "/var/log/nitro_enclaves/err2021-07-15T18:19:14.575657217+00:00.log"
Done!
[ec2-user@ip-172-31-32-27 aws]$ cat /var/log/nitro_enclaves/err2021-07-15T18:19:14.575657217+00:00.log
  Action: Enclave Console
  Subactions:
    Failed to retrieve enclave CID
    Failed to connect to enclave process
    Failed to connect to specific enclave process: Os { code: 2, kind: NotFound, message: "No such file or directory" }
  Root error file: src/enclave_proc_comm.rs
  Root error line: 129
  Build commit: not available
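
One observation about the output above, not from the original report: describe-enclaves prints the field as EnclaveID, while the start script queries .[0].EnclaveId. With jq, the mismatched key yields null, which would explain the console failing with NotFound when it looks up the enclave process. A sketch of the corrected lookup:

# note the capitalization: EnclaveID, as printed by describe-enclaves
nitro-cli console --enclave-id "$(nitro-cli describe-enclaves | jq -r '.[0].EnclaveID')"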

unable to run-enclave from systemd scripts

I've created a bash script that executes the run-enclave command and starts the enclave. But when I put the script as the ExecStart of my systemd service, the enclave instance exits automatically after the script runs. The same script runs successfully from a bash shell without any problem.

My systemd service file looks like:

[Unit]
Description=my service systemd service.
After=nitro-enclaves-allocator.service nitro-enclaves-vsock-proxy.service

[Service]
Type=oneshot
ExecStart=/myservice/start_enclave.sh
StandardOutput=file:/myservice/service_start.log
User=root

[Install]
WantedBy=multi-user.target

My start_enclave.sh looks like:

#!/bin/bash

get_enclave_id () {
    local enclave_id=$(nitro-cli describe-enclaves | jq .[0].EnclaveID)
    echo $enclave_id
}

get_enclave_status () {
    local enclave_id=$(get_enclave_id)
    if [[ "$enclave_id" == "null" ]]; then
        echo "ENCLAVE_INACTIVE"
    else
        echo "$enclave_id"
    fi
}

main () {
    local enclave_status=$(get_enclave_status)
    if [[ "$enclave_status" == "ENCLAVE_INACTIVE" ]]; then
        nitro-cli run-enclave --cpu-count 2 --memory 2048 --eif-path /myservice/cpe-ne.eif --enclave-cid 10
    else
        echo "Enclave $enclave_status is running"
    fi
}

main
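
One hedged guess at the cause, not confirmed in this thread: with Type=oneshot, systemd considers the unit stopped as soon as the script exits, and with the default KillMode=control-group it then kills every process left in the unit's cgroup, including the enclave process forked by nitro-cli. A minimal sketch of a unit tweak that keeps the unit counted as active after the script returns:

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/myservice/start_enclave.sh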

Can't build from Docker image with `pip install PySide2`

What does this error mean? I am trying to build an image containing the PySide2 pip package, and I hit the same issue with PyQt5.

Start building the Enclave Image...
Using the locally available Docker image...
Linuxkit reported an error while creating the customer ramfs: "Add init containers:
Process init image: docker.io/library/image
Add files:
  rootfs/dev
  rootfs/run
  rootfs/sys
  rootfs/var
  rootfs/proc
  rootfs/tmp
  cmd
  env
Create outputs:
fatal error: runtime: out of memory

runtime stack:
runtime.throw(0x7f0046618812, 0x16)
	/usr/lib/go/src/runtime/panic.go:608 +0x74
runtime.sysMap(0xc098000000, 0x84000000, 0x7f00482adc98)
	/usr/lib/go/src/runtime/mem_linux.go:156 +0xc9
runtime.(*mheap).sysAlloc(0x7f0048293120, 0x84000000, 0xa00053fe98, 0x7f0042497cd0)
	/usr/lib/go/src/runtime/malloc.go:619 +0x1c9
runtime.(*mheap).grow(0x7f0048293120, 0x4118c, 0x0)
	/usr/lib/go/src/runtime/mheap.go:920 +0x44
runtime.(*mheap).allocSpanLocked(0x7f0048293120, 0x4118c, 0x7f00482adca8, 0x20302200000000)
	/usr/lib/go/src/runtime/mheap.go:848 +0x339
runtime.(*mheap).alloc_m(0x7f0048293120, 0x4118c, 0xffffffffffff0101, 0x7f0042497d88)
	/usr/lib/go/src/runtime/mheap.go:692 +0x11d
runtime.(*mheap).alloc.func1()
	/usr/lib/go/src/runtime/mheap.go:759 +0x4e
runtime.(*mheap).alloc(0x7f0048293120, 0x4118c, 0x7f0042010101, 0x7f0045682a99)
	/usr/lib/go/src/runtime/mheap.go:758 +0x8c
runtime.largeAlloc(0x82318000, 0x7f00456c0101, 0x7f00449526c0)
	/usr/lib/go/src/runtime/malloc.go:1019 +0x99
runtime.mallocgc.func1()
	/usr/lib/go/src/runtime/malloc.go:914 +0x48
runtime.systemstack(0x0)
	/usr/lib/go/src/runtime/asm_amd64.s:351 +0x63
runtime.mstart()
	/usr/lib/go/src/runtime/proc.go:1229

goroutine 1 [running]:
runtime.systemstack_switch()
	/usr/lib/go/src/runtime/asm_amd64.s:311 fp=0xc000648d20 sp=0xc000648d18 pc=0x7f00456c3530
runtime.mallocgc(0x82318000, 0x7f0046bedb60, 0xc08bcfc001, 0xc000648df8)
	/usr/lib/go/src/runtime/malloc.go:913 +0x8a8 fp=0xc000648dc0 sp=0xc000648d20 pc=0x7f0045678718
runtime.makeslice(0x7f0046bedb60, 0x82318000, 0x82318000, 0xc000648e50, 0x7f0045697463, 0x7f0047098908)
	/usr/lib/go/src/runtime/slice.go:70 +0x79 fp=0xc000648df0 sp=0xc000648dc0 pc=0x7f00456ae3d9
bytes.makeSlice(0x82318000, 0x0, 0x0, 0x0)
	/usr/lib/go/src/bytes/buffer.go:231 +0x6f fp=0xc000648e30 sp=0xc000648df0 pc=0x7f004576b38f
bytes.(*Buffer).grow(0xc00013c1c0, 0x8000, 0x0)
	/usr/lib/go/src/bytes/buffer.go:144 +0x16e fp=0xc000648e80 sp=0xc000648e30 pc=0x7f004576acde
bytes.(*Buffer).Write(0xc00013c1c0, 0xc08bcfc000, 0x8000, 0x8000, 0x8000, 0x0, 0x0)
	/usr/lib/go/src/bytes/buffer.go:174 +0xe7 fp=0xc000648eb0 sp=0xc000648e80 pc=0x7f004576afd7
github.com/linuxkit/linuxkit/src/cmd/linuxkit/pad4.Writer.Write(0x7f00470a31a0, 0xc00013c1c0, 0x0, 0xc08bcfc000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
	/go/src/github.com/linuxkit/linuxkit/src/cmd/linuxkit/pad4/pad4.go:16 +0x55 fp=0xc000648ef8 sp=0xc000648eb0 pc=0x7f0045c82bc5
github.com/linuxkit/linuxkit/src/cmd/linuxkit/pad4.(*Writer).Write(0xc00026c020, 0xc08bcfc000, 0x8000, 0x8000, 0xc08bcfc000, 0x8000, 0x8000)
	<autogenerated>:1 +0x76 fp=0xc000648f50 sp=0xc000648ef8 pc=0x7f0045c82fe6
github.com/linuxkit/linuxkit/src/cmd/linuxkit/vendor/github.com/surma/gocpio.(*Writer).countedWrite(0xc000443ef0, 0xc08bcfc000, 0x8000, 0x8000, 0xc000649018, 0x7f0045a6d4df, 0xc08bb9fe00)
	/go/src/github.com/linuxkit/linuxkit/src/cmd/linuxkit/vendor/github.com/surma/gocpio/writer.go:105 +0x53 fp=0xc000648f98 sp=0xc000648f50 pc=0x7f0045c83973
github.com/linuxkit/linuxkit/src/cmd/linuxkit/vendor/github.com/surma/gocpio.(*Writer).Write(0xc000443ef0, 0xc08bcfc000, 0x8000, 0x8000, 0x8000, 0x0, 0x0)
	/go/src/github.com/linuxkit/linuxkit/src/cmd/linuxkit/vendor/github.com/surma/gocpio/writer.go:99 +0x5d fp=0xc000648fe0 sp=0xc000648f98 pc=0x7f0045c838cd
github.com/linuxkit/linuxkit/src/cmd/linuxkit/initrd.(*Writer).Write(0xc0001a8270, 0xc08bcfc000, 0x8000, 0x8000, 0x8000, 0x0, 0x0)
	/go/src/github.com/linuxkit/linuxkit/src/cmd/linuxkit/initrd/initrd.go:168 +0x4f fp=0xc000649028 sp=0xc000648fe0 pc=0x7f0045c84b3f
io.copyBuffer(0x7f00470a3560, 0xc0001a8270, 0x7f00470a3080, 0xc0000c6480, 0xc08bcfc000, 0x8000, 0x8000, 0xc000649150, 0x0, 0x0)
	/usr/lib/go/src/io/io.go:404 +0x203 fp=0xc000649098 sp=0xc000649028 pc=0x7f00456cfce3
io.Copy(0x7f00470a3560, 0xc0001a8270, 0x7f00470a3080, 0xc0000c6480, 0x0, 0x0, 0x2)
	/usr/lib/go/src/io/io.go:364 +0x5c fp=0xc0006490f8 sp=0xc000649098 pc=0x7f00456cf9ac
github.com/linuxkit/linuxkit/src/cmd/linuxkit/initrd.copyTarEntry(0xc0001a8270, 0xc08bce2620, 0x7f00470a3080, 0xc0000c6480, 0x13, 0x0, 0x0)
	/go/src/github.com/linuxkit/linuxkit/src/cmd/linuxkit/initrd/initrd.go:87 +0x28d fp=0xc0006491b0 sp=0xc0006490f8 pc=0x7f0045c841ed
github.com/linuxkit/linuxkit/src/cmd/linuxkit/initrd.CopySplitTar(0xc0001a8270, 0xc0000c6480, 0x1, 0x7f004268ac90, 0x7f00456c09b0, 0xc0006492b0, 0xc0006492c0, 0x10, 0x7f0046ce9700, 0x7f0046ce9700, ...)
	/go/src/github.com/linuxkit/linuxkit/src/cmd/linuxkit/initrd/initrd.go:144 +0x1d4 fp=0xc000649250 sp=0xc0006491b0 pc=0x7f0045c84484
github.com/linuxkit/linuxkit/src/cmd/linuxkit/moby.tarToInitrd(0x7f00470a6760, 0xc0000b2030, 0x7f0045736a31, 0xc000230310, 0xc000000300, 0xc0000ba1e0, 0x7f004567e473, 0x7f0046603332, 0x7fffa404b65e, 0xd, ...)
	/go/src/github.com/linuxkit/linuxkit/src/cmd/linuxkit/moby/output.go:272 +0x1cb fp=0xc000649340 sp=0xc000649250 pc=0x7f0045c9bfbb
github.com/linuxkit/linuxkit/src/cmd/linuxkit/moby.glob..func2(0xc0000ac360, 0x2d, 0x7f00470a6760, 0xc0000b2030, 0x400, 0x1, 0xc000649460, 0x7f00457612a1)
	/go/src/github.com/linuxkit/linuxkit/src/cmd/linuxkit/moby/output.go:48 +0x4f fp=0xc0006493e8 sp=0xc000649340 pc=0x7f0045ca16ef
github.com/linuxkit/linuxkit/src/cmd/linuxkit/moby.Formats(0xc0000ac360, 0x2d, 0xc000230310, 0xe, 0xc00009b380, 0x1, 0x1, 0x400, 0x1, 0x0, ...)
	/go/src/github.com/linuxkit/linuxkit/src/cmd/linuxkit/moby/output.go:260 +0x29a fp=0xc0006494a8 sp=0xc0006493e8 pc=0x7f0045c9bd0a
main.build(0xc0000b8020, 0x7, 0x7)
	/go/src/github.com/linuxkit/linuxkit/src/cmd/linuxkit/build.go:220 +0x116c fp=0xc000649f00 sp=0xc0006494a8 pc=0x7f00465a936c
main.main()
	/go/src/github.com/linuxkit/linuxkit/src/cmd/linuxkit/main.go:121 +0x417 fp=0xc000649f98 sp=0xc000649f00 pc=0x7f00465af127
runtime.main()
	/usr/lib/go/src/runtime/proc.go:201 +0x20f fp=0xc000649fe0 sp=0xc000649f98 pc=0x7f004569982f
runtime.goexit()
	/usr/lib/go/src/runtime/asm_amd64.s:1333 +0x1 fp=0xc000649fe8 sp=0xc000649fe0 pc=0x7f00456c5641

goroutine 19 [syscall]:
os/signal.signal_recv(0x0)
	/usr/lib/go/src/runtime/sigqueue.go:139 +0x9e
os/signal.loop()
	/usr/lib/go/src/os/signal/signal_unix.go:23 +0x24
created by os/signal.init.0
	/usr/lib/go/src/os/signal/signal_unix.go:29 +0x43

goroutine 9 [IO wait]:
internal/poll.runtime_pollWait(0x7f004268af00, 0x72, 0xc000064a88)
	/usr/lib/go/src/runtime/netpoll.go:173 +0x68
internal/poll.(*pollDesc).wait(0xc000260418, 0x72, 0xffffffffffffff00, 0x7f00470a8fe0, 0x7f0048237ac0)
	/usr/lib/go/src/internal/poll/fd_poll_runtime.go:85 +0x9c
internal/poll.(*pollDesc).waitRead(0xc000260418, 0xc0003bc000, 0x1000, 0x1000)
	/usr/lib/go/src/internal/poll/fd_poll_runtime.go:90 +0x3f
internal/poll.(*FD).Read(0xc000260400, 0xc0003bc000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/usr/lib/go/src/internal/poll/fd_unix.go:169 +0x17b
net.(*netFD).Read(0xc000260400, 0xc0003bc000, 0x1000, 0x1000, 0xc000064b70, 0x7f00456c21f0, 0xc000000300)
	/usr/lib/go/src/net/fd_unix.go:202 +0x51
net.(*conn).Read(0xc0000b2a20, 0xc0003bc000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/usr/lib/go/src/net/net.go:177 +0x6a
net/http.(*persistConn).Read(0xc000264360, 0xc0003bc000, 0x1000, 0x1000, 0xc000064c70, 0x7f0045671335, 0xc00008a4e0)
	/usr/lib/go/src/net/http/transport.go:1497 +0x77
bufio.(*Reader).fill(0xc00026fce0)
	/usr/lib/go/src/bufio/bufio.go:100 +0x111
bufio.(*Reader).Peek(0xc00026fce0, 0x1, 0x0, 0x0, 0x1, 0xc00008a420, 0x0)
	/usr/lib/go/src/bufio/bufio.go:132 +0x41
net/http.(*persistConn).readLoop(0xc000264360)
	/usr/lib/go/src/net/http/transport.go:1645 +0x1a4
created by net/http.(*Transport).dialConn
	/usr/lib/go/src/net/http/transport.go:1338 +0x943

goroutine 10 [select]:
net/http.(*persistConn).writeLoop(0xc000264360)
	/usr/lib/go/src/net/http/transport.go:1885 +0x115
created by net/http.(*Transport).dialConn
	/usr/lib/go/src/net/http/transport.go:1339 +0x968

goroutine 23 [IO wait]:
internal/poll.runtime_pollWait(0x7f004268ae30, 0x72, 0xc0000d1a88)
	/usr/lib/go/src/runtime/netpoll.go:173 +0x68
internal/poll.(*pollDesc).wait(0xc000248298, 0x72, 0xffffffffffffff00, 0x7f00470a8fe0, 0x7f0048237ac0)
	/usr/lib/go/src/internal/poll/fd_poll_runtime.go:85 +0x9c
internal/poll.(*pollDesc).waitRead(0xc000248298, 0xc000018000, 0x1000, 0x1000)
	/usr/lib/go/src/internal/poll/fd_poll_runtime.go:90 +0x3f
internal/poll.(*FD).Read(0xc000248280, 0xc000018000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/usr/lib/go/src/internal/poll/fd_unix.go:169 +0x17b
net.(*netFD).Read(0xc000248280, 0xc000018000, 0x1000, 0x1000, 0xc0000d1b70, 0x7f00456c21f0, 0xc000000300)
	/usr/lib/go/src/net/fd_unix.go:202 +0x51
net.(*conn).Read(0xc00000e050, 0xc000018000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/usr/lib/go/src/net/net.go:177 +0x6a
net/http.(*persistConn).Read(0xc00024a120, 0xc000018000, 0x1000, 0x1000, 0xc0000d1c70, 0x7f0045671335, 0xc0003b64e0)
	/usr/lib/go/src/net/http/transport.go:1497 +0x77
bufio.(*Reader).fill(0xc000080360)
	/usr/lib/go/src/bufio/bufio.go:100 +0x111
bufio.(*Reader).Peek(0xc000080360, 0x1, 0x0, 0x0, 0x1, 0xc0003b6420, 0x0)
	/usr/lib/go/src/bufio/bufio.go:132 +0x41
net/http.(*persistConn).readLoop(0xc00024a120)
	/usr/lib/go/src/net/http/transport.go:1645 +0x1a4
created by net/http.(*Transport).dialConn
	/usr/lib/go/src/net/http/transport.go:1338 +0x943

goroutine 24 [select]:
net/http.(*persistConn).writeLoop(0xc00024a120)
	/usr/lib/go/src/net/http/transport.go:1885 +0x115
created by net/http.(*Transport).dialConn
	/usr/lib/go/src/net/http/transport.go:1339 +0x968

goroutine 26 [IO wait]:
internal/poll.runtime_pollWait(0x7f004268ad60, 0x72, 0xc0000cda88)
	/usr/lib/go/src/runtime/netpoll.go:173 +0x68
internal/poll.(*pollDesc).wait(0xc00033a318, 0x72, 0xffffffffffffff00, 0x7f00470a8fe0, 0x7f0048237ac0)
	/usr/lib/go/src/internal/poll/fd_poll_runtime.go:85 +0x9c
internal/poll.(*pollDesc).waitRead(0xc00033a318, 0xc0003a4000, 0x1000, 0x1000)
	/usr/lib/go/src/internal/poll/fd_poll_runtime.go:90 +0x3f
internal/poll.(*FD).Read(0xc00033a300, 0xc0003a4000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/usr/lib/go/src/internal/poll/fd_unix.go:169 +0x17b
net.(*netFD).Read(0xc00033a300, 0xc0003a4000, 0x1000, 0x1000, 0x0, 0x0, 0x7f00465f8010)
	/usr/lib/go/src/net/fd_unix.go:202 +0x51
net.(*conn).Read(0xc00000e070, 0xc0003a4000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/usr/lib/go/src/net/net.go:177 +0x6a
net/http.(*persistConn).Read(0xc000480120, 0xc0003a4000, 0x1000, 0x1000, 0xc000480000, 0xc000480120, 0x0)
	/usr/lib/go/src/net/http/transport.go:1497 +0x77
bufio.(*Reader).fill(0xc0000b4480)
	/usr/lib/go/src/bufio/bufio.go:100 +0x111
bufio.(*Reader).Peek(0xc0000b4480, 0x1, 0x2, 0x0, 0x0, 0xc00008acc0, 0x0)
	/usr/lib/go/src/bufio/bufio.go:132 +0x41
net/http.(*persistConn).readLoop(0xc000480120)
	/usr/lib/go/src/net/http/transport.go:1645 +0x1a4
created by net/http.(*Transport).dialConn
	/usr/lib/go/src/net/http/transport.go:1338 +0x943

goroutine 27 [select]:
net/http.(*persistConn).writeLoop(0xc000480120)
	/usr/lib/go/src/net/http/transport.go:1885 +0x115
created by net/http.(*Transport).dialConn
	/usr/lib/go/src/net/http/transport.go:1339 +0x968
"
[ E48 ] EIF building error. Such error appears when trying to build an EIF file. In this case, the error backtrace provides detailed information on the failure reason.

outdated comments in nitro-enclaves-allocator

My copy of nitro-enclaves-allocator has outdated comments at the beginning of the file.

I think /etc/ne_mem etc. are from the early days; the configuration has since moved to /etc/nitro_enclaves/allocator.yaml.
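
For reference, a sketch of the current configuration file (the two keys match the ones edited elsewhere on this page; the values are placeholders):

---
# /etc/nitro_enclaves/allocator.yaml
memory_mib: 512
cpu_count: 2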

Cannot run hello-world docker image in enclave; throws error

Steps

$ docker images
REPOSITORY            TAG       IMAGE ID       CREATED        SIZE
vsock-sample-server   latest    f569077e6b95   2 days ago     10.1MB
alpine                latest    0ac33e5f5afa   5 days ago     5.57MB
hello-world           latest    feb5d9fea6a5   6 months ago   13.3kB

$ nitro-cli build-enclave --docker-uri hello-world:latest --output-file hello2.eif
Start building the Enclave Image...
Using the locally available Docker image...
Enclave Image successfully created.
{
  "Measurements": {
    "HashAlgorithm": "Sha384 { ... }",
    "PCR0": "297caf40f22a444b7ac98e8ee68768b5c86480559bedab11ebfda0feebe8453ec7cc966d862920fb553857e0b1b554ff",
    "PCR1": "bcdf05fefccaa8e55bf2c8d6dee9e79bbff31e34bf28a99aa19e6b29c37ee80b214a414b7607236edf26fcb78654e63f",
    "PCR2": "5a10913645be73e780072fe38b25b386b0f6e1bbfc35ec076e5e10d21f14d8a5fc409316b0eebd6bb8d3c7c0b1abed15"
  }
}
$ nitro-cli run-enclave --cpu-count 2 --memory 512 --eif-path hello2.eif 
[ E19 ] File operation failure. Such error appears when the system fails to perform the requested file operations, such as opening the EIF file when launching an enclave, or seeking to a specific offset in the EIF file, or writing to the log file.
File: '/sys/module/nitro_enclaves/parameters/ne_cpus', failing operation: 'Open'.

For more details, please visit https://docs.aws.amazon.com/enclaves/latest/user/cli-errors.html#E19

If you open a support ticket, please provide the error log found at "/var/log/nitro_enclaves/err2022-04-10T22:05:04.445699306+00:00.log"
Failed connections: 1
[ E39 ] Enclave process connection failure. Such error appears when the enclave manager fails to connect to at least one enclave process for retrieving the description information.

For more details, please visit https://docs.aws.amazon.com/enclaves/latest/user/cli-errors.html#E39

If you open a support ticket, please provide the error log found at "/var/log/nitro_enclaves/err2022-04-10T22:05:04.445822326+00:00.log"

Error Log

  Action: Run Enclave
  Subactions:
    Failed to execute command `Run`
    Failed to trigger enclave run
    Failed to construct CPU information
    Failed to open CPU pool file: No such file or directory (os error 2)
  Root error file: src/enclave_proc/cpu_info.rs
  Root error line: 164
  Build commit: not available


Enclave CLI Version

Installed Packages
aws-nitro-enclaves-cli.x86_64                    1.2.0-0.amzn2                    @amzn2extra-aws-nitro-enclaves-cli
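
Two hedged checks, assumptions rather than anything from this report, that narrow down an E19 on /sys/module/nitro_enclaves/parameters/ne_cpus (INSTANCE_ID is a placeholder):

# was the instance launched with enclave support, and is the driver loaded?
aws ec2 describe-instances --instance-ids "$INSTANCE_ID" \
    --query 'Reservations[].Instances[].EnclaveOptions'
lsmod | grep nitro_enclaves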

`udevadm trigger -y nitro_enclaves` does not appear to apply udev rules properly when run from userdata during node provisioning

I am adding Nitro enclave nodes to my EKS cluster and overall it is working well, but I do have an issue that I have had to work around.

I am spinning up an EKS cluster using:

I have one node that is Enclave Enabled and in the userdata for that node I have the following steps (among others):

amazon-linux-extras install aws-nitro-enclaves-cli -y
yum install aws-nitro-enclaves-cli-devel -y
usermod -aG ne ec2-user
usermod -aG docker ec2-user
systemctl start nitro-enclaves-allocator.service && systemctl enable nitro-enclaves-allocator.service
systemctl start docker  && systemctl enable docker

When the system is done provisioning, if I log in as ec2-user, I am unable to run an enclave unless I reboot the node first.

After some help from @andraprs, we determined that the issue appears to be related to:
https://github.com/aws/aws-nitro-enclaves-cli/blob/main/SPECS/aws-nitro-enclaves-cli.spec#L114

From what I can tell, this line doesn't do anything in this scenario, maybe because services are still starting?

The initial fix I discovered was to simply add chgrp ne /dev/nitro_enclaves to the bottom of my userdata script.

Running udevadm trigger -y nitro_enclaves from the RPM with an additional readiness and success check might help here, as in the sketch below.
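
A sketch of what such a check could look like in userdata (the group test is an assumption about the intended effect of the udev rule):

# re-apply the rule, wait for udev to settle, and fall back to chgrp
udevadm trigger -y nitro_enclaves
udevadm settle
stat -c %G /dev/nitro_enclaves | grep -qx ne || chgrp ne /dev/nitro_enclaves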

@andraprs and I have more detailed logs, etc, but I think that this should cover the important details.

Fail to read "/root/.aws/config"

I launched a bare AWS Nitro image from AWS and followed the documentation fully, but I got stuck when I tried:

docker run --network host -it kmstool-instance \
    /kmstool_instance --cid "$ENCLAVE_CID" "$CIPHERTEXT"

I got the error message:

[ERROR] [2020-12-17T12:07:35Z] [00007f76c19d7880] [file-utils] - static: Failed to open file /root/.aws/config with errno 2
[WARN] [2020-12-17T12:07:35Z] [00007f76c19d7880] [AuthProfile] - Failed to read file at "/root/.aws/config"
[DEBUG] [2020-12-17T12:07:35Z] [00007f76c19d7880] [AuthProfile] - Creating profile collection from file at "/root/.aws/credentials"
[ERROR] [2020-12-17T12:07:35Z] [00007f76c19d7880] [file-utils] - static: Failed to open file /root/.aws/credentials with errno 2
[WARN] [2020-12-17T12:07:35Z] [00007f76c19d7880] [AuthProfile] - Failed to read file at "/root/.aws/credentials"
[ERROR] [2020-12-17T12:07:35Z] [00007f76c19d7880] [AuthCredentialsProvider] - static: Profile credentials parser could not load or parse a credentials or config file.
...

I wonder why it must get credentials from the root user's home directory. Should it run with root privileges? Why not expose an interface so it can get credentials from the calling user?

And how can I resolve it now?
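
A hedged workaround, not from the original report: the log suggests the tool uses the AWS profile loader's default paths, which resolve under /root inside the container, so bind-mounting host credentials there may be enough:

docker run --network host -v "$HOME/.aws:/root/.aws:ro" -it kmstool-instance \
    /kmstool_instance --cid "$ENCLAVE_CID" "$CIPHERTEXT"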

Increase enclave disk space

When starting enclaves on my EC2 machine (m5.2xlarge), I noticed that the enclave always has 4.03 GB of disk space, which is not enough for my code and the data I want to handle.

Output of df run inside an enclave containing only the latest ubuntu image:

Filesystem     1K-blocks   Used Available Use% Mounted on
dev              4031892      0   4031892   0% /dev
tmpfs            4083476      0   4083476   0% /run
tmpfs            4083476      0   4083476   0% /tmp
shm              4083476      0   4083476   0% /dev/shm
cgroup_root      4083476      0   4083476   0% /sys/fs/cgroup

Is there any way to increase the amount of disk space allocated to the enclave?
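
One note, an assumption rather than something confirmed in this thread: the enclave has no block device, so every filesystem in the df output above is RAM-backed; the practical way to get more scratch space is to allocate more enclave memory at launch (app.eif is a placeholder):

nitro-cli run-enclave --cpu-count 2 --memory 16384 --eif-path app.eif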

Early kernel boot debugging?

Hi, we would like to use a custom-built kernel for our enclaves; however, we're bumping into issues during boot. It seems init doesn't reach the enclave_ready() call, and nitro-cli run-enclave doesn't return as it does for enclaves built with the official kernel. Consequently, we cannot even attach the console to the enclave to look at kernel logs.

One of the reasons we'd like to use our own kernel is to strip down the kernel config, and most probably our kernel is too minimal at this point for Nitro usage. We have tested our kernel+initrd with QEMU and it works well, so there must be something Nitro-specific that's failing (we have configured MMIO-based virtio as well).

We are unsure how to proceed with the debugging, aside from slowly enabling all kernel features that the official nitro kernel has (although we're using a 5.16 kernel, so there are some inherent differences). Is there any way we can attach to the serial without the nitro-cli console invocation so we can see kernel messages before the enclave_ready() heartbeat?

I have attached our kernel config just in case (this is not the fully stripped-down version; it contains debugging features): config.txt

Maximum resources possible for allocation

I have searched through the documentation and run some experiments, yet I have not figured out the maximum number of vCPUs and the amount of memory I could allocate to an enclave.

Have you done these tests at some point and reached a value, or does the behaviour become "undefined" above some point?

For example:
An r5.24xlarge EC2 instance boasts 96 vCPUs and 768 GiB of RAM, so I was wondering how many of those resources I could realistically use in an enclave.

Nitro-cli console reports error after successful enclave exit

I am trying to run an enclave in debug mode with an attached console, using nitro-cli console or nitro-cli run-enclave --attach-console. Every time my enclave exits, I get this at the end:

[   60.505028] Unregister pv shared memory for cpu 1
[   60.506120] Unregister pv shared memory for cpu 0
[   60.507034] reboot: Restarting system
[   60.507642] reboot: machine restart
[ E45 ] Enclave console read error. Such error appears when reading from a running enclave's console fails.

For more details, please visit https://docs.aws.amazon.com/enclaves/latest/user/cli-errors.html#E45

If you open a support ticket, please provide the error log found at "/var/log/nitro_enclaves/err2022-08-25T13:18:36.623632872+00:00.log".

My enclave is a simple bash script that runs on Ubuntu and only sleeps before exiting, so I am pretty sure the enclave exits successfully. The error itself doesn't interfere with the application, but it adds confusion for the user. Is this proper behavior for the console, or could it be fixed?

My Dockerfile and script look like this:

FROM ubuntu
COPY start.sh /
CMD ./start.sh

sleep 60s

nitro-cli console should report if debug-flag is not present

In the case where --debug-mode is not passed when running the enclave, nitro-cli console won't be able to connect to it.

However, the CLI has all the information it needs to report a proper error when that flag is not present, instead of reporting that it could not connect to the enclave console, which could happen for multiple reasons.
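
A sketch of the check a user can do by hand today, assuming describe-enclaves reports a Flags field that contains DEBUG_MODE for debug enclaves:

nitro-cli describe-enclaves | jq -r '.[0].Flags' | grep -q DEBUG_MODE \
    && nitro-cli console --enclave-id "$(nitro-cli describe-enclaves | jq -r '.[0].EnclaveID')" \
    || echo "enclave was not started with --debug-mode"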

`make nitro-cli` fails on Ubuntu 18.04

When following the build instructions, I get an error.
I've isolated it to the .build-container target; here is the error I get when I run make nitro-cli:

Step 23/27 : RUN  source $HOME/.cargo/env &&    cargo install cargo-audit --version ^0.12
 ---> Running in 02e0cbf2c49d
    Updating crates.io index
 Downloading crates ...
  Downloaded cargo-audit v0.12.1
  Installing cargo-audit v0.12.1
 Downloading crates ...
  Downloaded thiserror v1.0.23
  Downloaded rustsec v0.20.1
  Downloaded gumdrop v0.7.0
  Downloaded lazy_static v1.4.0
  Downloaded serde v1.0.123
  Downloaded home v0.5.3
  Downloaded smol_str v0.1.16
  Downloaded abscissa_core v0.5.2
  Downloaded serde_json v1.0.61
  Downloaded gumdrop_derive v0.7.0
  Downloaded chrono v0.4.19
  Downloaded thiserror-impl v1.0.23
  Downloaded serde_derive v1.0.123
  Downloaded toml v0.5.8
  Downloaded semver v0.9.0
  Downloaded cvss v1.0.1
  Downloaded platforms v0.2.1
  Downloaded semver-parser v0.9.0
  Downloaded cargo-lock v4.0.1
  Downloaded crates-index v0.14.3
  Downloaded git2 v0.13.17
  Downloaded num-traits v0.2.14
  Downloaded once_cell v1.5.2
  Downloaded canonical-path v2.0.2
  Downloaded signal-hook v0.1.17
  Downloaded secrecy v0.6.0
  Downloaded backtrace v0.3.56
  Downloaded libc v0.2.83
  Downloaded tracing v0.1.22
  Downloaded regex v1.4.3
  Downloaded semver-parser v0.7.0
  Downloaded ryu v1.0.5
  Downloaded termcolor v1.1.2
  Downloaded time v0.1.44
  Downloaded quote v1.0.8
  Downloaded proc-macro2 v1.0.24
  Downloaded syn v1.0.60
  Downloaded tracing-log v0.1.1
  Downloaded color-backtrace v0.3.0
  Downloaded itoa v0.4.7
  Downloaded tracing-subscriber v0.1.6
  Downloaded num-integer v0.1.44
  Downloaded wait-timeout v0.2.0
  Downloaded generational-arena v0.2.8
  Downloaded abscissa_derive v0.5.0
  Downloaded object v0.23.0
  Downloaded autocfg v1.0.1
  Downloaded zeroize v1.2.0
  Downloaded memchr v2.3.4
  Downloaded tracing-core v0.1.17
  Downloaded matchers v0.0.1
  Downloaded ansi_term v0.11.0
  Downloaded tracing-attributes v0.1.11
  Downloaded rustc-demangle v0.1.18
  Downloaded miniz_oxide v0.4.3
  Downloaded regex-syntax v0.6.22
  Downloaded pin-project-lite v0.2.4
  Downloaded error-chain v0.12.4
  Downloaded petgraph v0.5.1
  Downloaded owning_ref v0.4.1
  Downloaded glob v0.3.0
  Downloaded cfg-if v1.0.0
  Downloaded addr2line v0.14.1
  Downloaded signal-hook-registry v1.3.0
  Downloaded bitflags v1.2.1
  Downloaded libgit2-sys v0.12.18+1.1.0
  Downloaded log v0.4.14
  Downloaded smallvec v0.6.14
  Downloaded aho-corasick v0.7.15
  Downloaded thread_local v1.1.2
  Downloaded atty v0.2.14
  Downloaded unicode-xid v0.2.1
  Downloaded openssl-probe v0.1.2
  Downloaded openssl-sys v0.9.60
  Downloaded url v2.2.0
  Downloaded ident_case v1.0.1
  Downloaded stable_deref_trait v1.2.0
  Downloaded regex-automata v0.1.9
  Downloaded cc v1.0.66
  Downloaded version_check v0.9.2
  Downloaded cfg-if v0.1.10
  Downloaded gimli v0.23.0
  Downloaded pkg-config v0.3.19
  Downloaded libssh2-sys v0.2.20
  Downloaded indexmap v1.6.1
  Downloaded libz-sys v1.1.2
  Downloaded maybe-uninit v2.0.0
  Downloaded fixedbitset v0.2.0
  Downloaded adler v0.2.3
  Downloaded darling v0.10.2
  Downloaded synstructure v0.12.4
  Downloaded percent-encoding v2.1.0
  Downloaded hashbrown v0.9.1
  Downloaded form_urlencoded v1.0.0
  Downloaded idna v0.2.0
  Downloaded jobserver v0.1.21
  Downloaded matches v0.1.8
  Downloaded darling_core v0.10.2
  Downloaded darling_macro v0.10.2
  Downloaded unicode-bidi v0.3.4
  Downloaded unicode-normalization v0.1.16
  Downloaded fnv v1.0.7
  Downloaded strsim v0.9.3
  Downloaded byteorder v1.4.2
  Downloaded tinyvec v1.1.1
  Downloaded tinyvec_macros v0.1.0
   Compiling proc-macro2 v1.0.24
   Compiling autocfg v1.0.1
   Compiling libc v0.2.83
   Compiling unicode-xid v0.2.1
   Compiling syn v1.0.60
   Compiling serde_derive v1.0.123
   Compiling serde v1.0.123
   Compiling pkg-config v0.3.19
   Compiling cfg-if v1.0.0
   Compiling log v0.4.14
   Compiling matches v0.1.8
   Compiling tinyvec_macros v0.1.0
   Compiling gimli v0.23.0
   Compiling adler v0.2.3
   Compiling memchr v2.3.4
   Compiling object v0.23.0
   Compiling rustc-demangle v0.1.18
   Compiling percent-encoding v2.1.0
error[E0658]: subslice patterns are unstable
   --> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/object-0.23.0/src/read/mod.rs:162:41
    |
162 |             [0x7f, b'E', b'L', b'F', 1, ..] => FileKind::Elf32,
    |                                         ^^
    |
    = note: for more information, see https://github.com/rust-lang/rust/issues/62254

error[E0658]: subslice patterns are unstable
   --> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/object-0.23.0/src/read/mod.rs:164:41
    |
164 |             [0x7f, b'E', b'L', b'F', 2, ..] => FileKind::Elf64,
    |                                         ^^
    |
    = note: for more information, see https://github.com/rust-lang/rust/issues/62254

error[E0658]: subslice patterns are unstable
   --> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/object-0.23.0/src/read/mod.rs:166:38
    |
166 |             [0xfe, 0xed, 0xfa, 0xce, ..]
    |                                      ^^
    |
    = note: for more information, see https://github.com/rust-lang/rust/issues/62254

error[E0658]: subslice patterns are unstable
   --> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/object-0.23.0/src/read/mod.rs:167:40
    |
167 |             | [0xce, 0xfa, 0xed, 0xfe, ..] => FileKind::MachO32,
    |                                        ^^
    |
    = note: for more information, see https://github.com/rust-lang/rust/issues/62254

error[E0658]: subslice patterns are unstable
   --> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/object-0.23.0/src/read/mod.rs:169:40
    |
169 |             | [0xfe, 0xed, 0xfa, 0xcf, ..]
    |                                        ^^
    |
    = note: for more information, see https://github.com/rust-lang/rust/issues/62254

error[E0658]: subslice patterns are unstable
   --> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/object-0.23.0/src/read/mod.rs:170:40
    |
170 |             | [0xcf, 0xfa, 0xed, 0xfe, ..] => FileKind::MachO64,
    |                                        ^^
    |
    = note: for more information, see https://github.com/rust-lang/rust/issues/62254

error[E0658]: subslice patterns are unstable
   --> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/object-0.23.0/src/read/mod.rs:172:38
    |
172 |             [0xca, 0xfe, 0xba, 0xbe, ..] => FileKind::MachOFat32,
    |                                      ^^
    |
    = note: for more information, see https://github.com/rust-lang/rust/issues/62254

error[E0658]: subslice patterns are unstable
   --> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/object-0.23.0/src/read/mod.rs:174:38
    |
174 |             [0xca, 0xfe, 0xba, 0xbf, ..] => FileKind::MachOFat64,
    |                                      ^^
    |
    = note: for more information, see https://github.com/rust-lang/rust/issues/62254

error[E0658]: subslice patterns are unstable
   --> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/object-0.23.0/src/read/mod.rs:178:26
    |
178 |             [b'M', b'Z', ..] => {
    |                          ^^
    |
    = note: for more information, see https://github.com/rust-lang/rust/issues/62254

error[E0658]: subslice patterns are unstable
   --> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/object-0.23.0/src/read/mod.rs:194:26
    |
194 |             [0x4c, 0x01, ..]
    |                          ^^
    |
    = note: for more information, see https://github.com/rust-lang/rust/issues/62254

error[E0658]: subslice patterns are unstable
   --> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/object-0.23.0/src/read/mod.rs:196:28
    |
196 |             | [0x64, 0x86, ..] => FileKind::Coff,
    |                            ^^
    |
    = note: for more information, see https://github.com/rust-lang/rust/issues/62254

   Compiling strsim v0.9.3
   Compiling regex-syntax v0.6.22
   Compiling ryu v1.0.5
   Compiling version_check v0.9.2
   Compiling ident_case v1.0.1
   Compiling fnv v1.0.7
   Compiling bitflags v1.2.1
   Compiling maybe-uninit v2.0.0
   Compiling lazy_static v1.4.0
   Compiling hashbrown v0.9.1
   Compiling once_cell v1.5.2
   Compiling serde_json v1.0.61
   Compiling byteorder v1.4.2
   Compiling semver-parser v0.7.0
   Compiling openssl-probe v0.1.2
   Compiling stable_deref_trait v1.2.0
   Compiling fixedbitset v0.2.0
   Compiling itoa v0.4.7
   Compiling pin-project-lite v0.2.4
   Compiling ansi_term v0.11.0
   Compiling glob v0.3.0
   Compiling termcolor v1.1.2
   Compiling zeroize v1.2.0
   Compiling home v0.5.3
   Compiling cfg-if v0.1.10
   Compiling canonical-path v2.0.2
   Compiling semver-parser v0.9.0
   Compiling tinyvec v1.1.1
   Compiling unicode-bidi v0.3.4
   Compiling form_urlencoded v1.0.0
   Compiling tracing-core v0.1.17
   Compiling thread_local v1.1.2
   Compiling miniz_oxide v0.4.3
   Compiling num-traits v0.2.14
   Compiling num-integer v0.1.44
   Compiling indexmap v1.6.1
   Compiling owning_ref v0.4.1
   Compiling generational-arena v0.2.8
   Compiling error-chain v0.12.4
   Compiling regex-automata v0.1.9
   Compiling addr2line v0.14.1
error: aborting due to 11 previous errors

For more information about this error, try `rustc --explain E0658`.
error: could not compile `object`.
warning: build failed, waiting for other jobs to finish...
error: failed to compile `cargo-audit v0.12.1`, intermediate artifacts can be found at `/tmp/cargo-installwwGlHn`

Caused by:
  build failed
The command '/bin/bash -c source $HOME/.cargo/env &&    cargo install cargo-audit --version ^0.12' returned a non-zero code: 101
Makefile:92: recipe for target '.build-container' failed
make: *** [.build-container] Error 101

It appears that the software is using an unstable feature.
For convenience, I've included the rustc --explain output suggested above:

An unstable feature was used.

Erroneous code example:

#[repr(u128)] // error: use of unstable library feature 'repr128'
enum Foo {
    Bar(u64),
}

If you're using a stable or a beta version of rustc, you won't be able to use
any unstable features. In order to do so, please switch to a nightly version of
rustc (by using rustup).

If you're using a nightly version of rustc, just add the corresponding feature
to be able to use it:

#![feature(repr128)]

#[repr(u128)] // ok!
enum Foo {
    Bar(u64),
}

It seems like the version of cargo-audit being installed is not compatible with the Rust version you have specified in tools/Dockerfile1804, which is currently set to 1.41.1. When I change the following line:

ENV RUST_VERSION=1.41.1

to

ENV RUST_VERSION=1.49

The problem goes away. I am, of course, not certain that this is the correct way to solve this.

Digging deeper, it appears that (either directly or indirectly), cargo-audit depends on the object crate, which was updated between version 0.22 and 0.23 to use the feature that is unstable on Rust version 1.41.1.

Another possible way to solve this would be to be more specific about the version of cargo-audit being installed. Currently --version ^0.12 is used; perhaps pinning an exact version would also solve the problem.
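
A sketch along those lines (an untested assumption): pin the exact version and install with the crate's own lockfile, if it ships one, so that newer dependencies that are incompatible with Rust 1.41, such as object 0.23, are not pulled in:

cargo install cargo-audit --version 0.12.1 --locked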

Enclave hangs up if Dockerfile CMD has a relative path

I have a Dockerfile with a relative path in the CMD:

CMD ["python3" , "./ubuntu-python-server/server.py"]

or

WORKDIR /home
CMD ["python3" , "ubuntu-python-server/server.py"]

An enclave created using the run-enclave command is created and then terminated immediately, possibly due to a missing socket connection; /run/nitro_enclaves/ has no .sock file.

The complete log is as follows:

[nitro-cli:28204][INFO][2022-06-22T06:28:57.279Z][src/main.rs:72] Start Nitro CLI
[nitro-cli:28204][INFO][2022-06-22T06:28:57.279Z][src/main.rs:115] Sent command: Run
[enc-xxxxxxx:28206][INFO][2022-06-22T06:28:57.280Z][src/enclave_proc/mod.rs:571] Enclave process PID: 28206
[enc-xxxxxxx:28206][INFO][2022-06-22T06:28:57.280Z][src/enclave_proc/mod.rs:479] Received command: Run
[enc-xxxxxxx:28206][INFO][2022-06-22T06:28:57.280Z][src/enclave_proc/mod.rs:272] Run args = RunEnclavesArgs { eif_path: "./d3.eif", enclave_cid: Some(17), memory_mib: 3072, cpu_ids: None, debug_mode: Some(true), attach_console: false, cpu_count: Some(2), enclave_name: Some("d3_error") }
[enc-xxxxxxx:28206][INFO][2022-06-22T06:28:57.280Z][src/enclave_proc/resource_manager.rs:371] Allocating memory regions to hold 3221225472 bytes.
[enc-xxxxxxx:28206][INFO][2022-06-22T06:28:57.281Z][src/enclave_proc/resource_manager.rs:453] Allocated 3 region(s): 3 page(s) of 1024 MB
[enc-xxxxxxx:28206][INFO][2022-06-22T06:28:58.019Z][src/enclave_proc/resource_manager.rs:693] Finished initializing memory.
[enc-xxxxxxx:28206][INFO][2022-06-22T06:29:02.956Z][src/enclave_proc/mod.rs:281] Enclave ID = i-0dca5a2cb0a6e6ffc-enc1818a1985367667
[enc-1818a1985367667:28206][WARN][2022-06-22T06:29:03.556Z][src/enclave_proc/mod.rs:207] Received hang-up event from the enclave. Enclave process will shut down.
[enc-1818a1985367667:28206][INFO][2022-06-22T06:29:03.556Z][src/enclave_proc/mod.rs:541] Enclave process 28206 exited event loop.
[enc-1818a1985367667:28206][INFO][2022-06-22T06:29:03.558Z][src/enclave_proc/resource_manager.rs:762] Enclave terminated.
[nitro-cli:28211][INFO][2022-06-22T06:29:15.579Z][src/main.rs:72] Start Nitro CLI
[nitro-cli:28211][INFO][2022-06-22T06:29:15.579Z][src/main.rs:211] Sent command: Describe

It succeeds if I use an absolute path in the Dockerfile CMD:

CMD ["python3" , "/home/ubuntu-python-server/server.py"]

Recreating the error:

Dockerfile:

# Fetch ubuntu
FROM ubuntu:bionic

WORKDIR /home

COPY server.py /home/server.py

# Get packages
RUN apt-get update
RUN apt-get install python3 -y
RUN apt-get install -f -y

CMD ["python3" , "./server.py"]

server.py:

# // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# // SPDX-License-Identifier: MIT-0

import time

def main():
    count = 1
    while True:
        print(f"[{count:4d}] Hello from the enclave side!")
        count += 1
        time.sleep(5)

if __name__ == '__main__':
    main()

Build the image: docker build ./ -t d3_error
Build the enclave image: nitro-cli build-enclave --docker-uri d3_error:latest --output-file ./d3_error.eif
Run the enclave: nitro-cli run-enclave --cpu-count 2 --memory 1024 --eif-path ./d3_error.eif --debug-mode --enclave-cid 17
Describe enclaves: nitro-cli describe-enclaves returns []

Just to add, docker run succeeds: docker run -i -t --name d3_error_c d3_error:latest

Error running nitro enclave allocator

Error message:

[ec2-user@ip-172-16-1-154 ~]$ sudo systemctl status nitro-enclaves-allocator.service
● nitro-enclaves-allocator.service - Nitro Enclaves Resource Allocator
   Loaded: loaded (/usr/lib/systemd/system/nitro-enclaves-allocator.service; disabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Fri 2021-04-16 17:26:37 UTC; 2s ago
  Process: 3569 ExecStart=/usr/bin/nitro-enclaves-allocator (code=exited, status=1/FAILURE)
 Main PID: 3569 (code=exited, status=1/FAILURE)

Apr 16 17:26:37 ip-172-16-1-154.ec2.internal systemd[1]: Starting Nitro Enclaves Resource Allocator...
Apr 16 17:26:37 ip-172-16-1-154.ec2.internal nitro-enclaves-allocator[3569]: /usr/bin/nitro-enclaves-allocator: line 130: /sys/module/nitro_enclaves/parameters/ne_cpus: No such file or directory
Apr 16 17:26:37 ip-172-16-1-154.ec2.internal nitro-enclaves-allocator[3569]: cat: .tmp_file: No such file or directory
Apr 16 17:26:37 ip-172-16-1-154.ec2.internal nitro-enclaves-allocator[3569]: rm: cannot remove '.tmp_file': No such file or directory
Apr 16 17:26:37 ip-172-16-1-154.ec2.internal nitro-enclaves-allocator[3569]: Error: The CPU pool file is missing. Please make sure the Nitro Enclaves driver is inserted.
Apr 16 17:26:37 ip-172-16-1-154.ec2.internal systemd[1]: nitro-enclaves-allocator.service: main process exited, code=exited, status=1/FAILURE
Apr 16 17:26:37 ip-172-16-1-154.ec2.internal systemd[1]: Failed to start Nitro Enclaves Resource Allocator.
Apr 16 17:26:37 ip-172-16-1-154.ec2.internal systemd[1]: Unit nitro-enclaves-allocator.service entered failed state.
Apr 16 17:26:37 ip-172-16-1-154.ec2.internal systemd[1]: nitro-enclaves-allocator.service failed.

Running on m5.xlarge Amazon Linux.
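
A hedged first step, not from the original report: the log suggests the nitro_enclaves driver was never loaded, which can be checked and, if the instance was launched with enclave support, fixed by hand before restarting the service:

lsmod | grep nitro_enclaves || sudo modprobe nitro_enclaves
sudo systemctl restart nitro-enclaves-allocator.service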

Signing enclave images with HSM/KMS

Currently the only option to sign an enclave image is to pass the private key and the certificate to the build-enclave command. However, this prevents storing the key in secure storage like an HSM or KMS and using it only for signing, without ever retrieving the key itself.
One solution is to allow adding the signature after the enclave image has been created, so the signature can be produced independently from the enclave creation.
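
For reference, the current interface described above, where the key material must be present as local files (the file names are placeholders):

nitro-cli build-enclave --docker-uri app:latest --output-file app.eif \
    --private-key key.pem --signing-certificate certificate.pem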

command-executer README and build-enclave docs incomplete

A few obstacles that I faced while building the command-executer:

  1. The cargo build instruction is missing the --release flag
  2. NITRO_CLI_BLOBS should point to the x86_64 sub-directory of blobs, otherwise the build-enclave operation will not complete successfully (see the sketch after this list)
  3. The NITRO_CLI_BLOBS environment variable is mentioned nowhere in the official docs; there is only a hint of its existence in the description of error E52
  4. The build-enclave operation completes successfully even without the docker-dir flag being stated
  5. I couldn't understand from the docs what the docker-dir flag really does
  6. I would add a warning to the command-executer README emphasizing that this utility creates a safety breach and should not be used in production enclaves
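
Items 1 and 2, spelled out as a sketch (the blobs path is an assumption based on a default nitro-cli install; the image name is a placeholder):

cargo build --release
export NITRO_CLI_BLOBS=/usr/share/nitro_enclaves/blobs/x86_64
nitro-cli build-enclave --docker-uri command-executer:latest --output-file command-executer.eif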

Can't build EIF from Docker image with `apt-get install snapd`

Docker successfully builds an image from a Dockerfile containing apt-get install snapd. But nitro-cli build-enclave fails, raising:

Using the locally available Docker image...
Linuxkit reported an error while creating the customer ramfs: "Add init containers:
Process init image: docker.io/library/image
Add files:
  rootfs/dev
  rootfs/run
  rootfs/sys
  rootfs/var
  rootfs/proc
  rootfs/tmp
  cmd
  env
Create outputs:
fatal error: runtime: out of memory

runtime stack:
runtime.throw(0x7fdff6f1f812, 0x16)
	/usr/lib/go/src/runtime/panic.go:608 +0x74
runtime.sysMap(0xc098000000, 0x84000000, 0x7fdff8bb4c98)
	/usr/lib/go/src/runtime/mem_linux.go:156 +0xc9
runtime.(*mheap).sysAlloc(0x7fdff8b9a120, 0x84000000, 0x116bb8, 0x7fdff1697cd0)
	/usr/lib/go/src/runtime/malloc.go:619 +0x1c9
runtime.(*mheap).grow(0x7fdff8b9a120, 0x41189, 0x0)
	/usr/lib/go/src/runtime/mheap.go:920 +0x44
runtime.(*mheap).allocSpanLocked(0x7fdff8b9a120, 0x41189, 0x7fdff8bb4ca8, 0x20301300000000)
	/usr/lib/go/src/runtime/mheap.go:848 +0x339
runtime.(*mheap).alloc_m(0x7fdff8b9a120, 0x41189, 0xffffffffffff0101, 0x7fdff1697d88)
	/usr/lib/go/src/runtime/mheap.go:692 +0x11d
runtime.(*mheap).alloc.func1()
	/usr/lib/go/src/runtime/mheap.go:759 +0x4e
runtime.(*mheap).alloc(0x7fdff8b9a120, 0x41189, 0x7fdff1010101, 0x7fdff5f89a99)
	/usr/lib/go/src/runtime/mheap.go:758 +0x8c
runtime.largeAlloc(0x82310763, 0x7fdff5fc0101, 0x7fdff5259000)
	/usr/lib/go/src/runtime/malloc.go:1019 +0x99
runtime.mallocgc.func1()
	/usr/lib/go/src/runtime/malloc.go:914 +0x48
runtime.systemstack(0x0)
	/usr/lib/go/src/runtime/asm_amd64.s:351 +0x63
runtime.mstart()
	/usr/lib/go/src/runtime/proc.go:1229

goroutine 1 [running]:
runtime.systemstack_switch()
	/usr/lib/go/src/runtime/asm_amd64.s:311 fp=0xc00055ad20 sp=0xc00055ad18 pc=0x7fdff5fca530
runtime.mallocgc(0x82310763, 0x7fdff74f4b60, 0xc04d62c001, 0xc00055adf8)
	/usr/lib/go/src/runtime/malloc.go:913 +0x8a8 fp=0xc00055adc0 sp=0xc00055ad20 pc=0x7fdff5f7f718
runtime.makeslice(0x7fdff74f4b60, 0x82310763, 0x82310763, 0xc00055ae50, 0x7fdff5f9e463, 0x7fdff799f908)
	/usr/lib/go/src/runtime/slice.go:70 +0x79 fp=0xc00055adf0 sp=0xc00055adc0 pc=0x7fdff5fb53d9
bytes.makeSlice(0x82310763, 0x0, 0x0, 0x0)
	/usr/lib/go/src/bytes/buffer.go:231 +0x6f fp=0xc00055ae30 sp=0xc00055adf0 pc=0x7fdff607238f
bytes.(*Buffer).grow(0xc00013c1c0, 0x763, 0x0)
	/usr/lib/go/src/bytes/buffer.go:144 +0x16e fp=0xc00055ae80 sp=0xc00055ae30 pc=0x7fdff6071cde
bytes.(*Buffer).Write(0xc00013c1c0, 0xc04d62c000, 0x763, 0x8000, 0x763, 0x0, 0x0)
	/usr/lib/go/src/bytes/buffer.go:174 +0xe7 fp=0xc00055aeb0 sp=0xc00055ae80 pc=0x7fdff6071fd7
github.com/linuxkit/linuxkit/src/cmd/linuxkit/pad4.Writer.Write(0x7fdff79aa1a0, 0xc00013c1c0, 0x0, 0xc04d62c000, 0x763, 0x8000, 0x0, 0x0, 0x7fdff5259000)
	/go/src/github.com/linuxkit/linuxkit/src/cmd/linuxkit/pad4/pad4.go:16 +0x55 fp=0xc00055aef8 sp=0xc00055aeb0 pc=0x7fdff6589bc5
github.com/linuxkit/linuxkit/src/cmd/linuxkit/pad4.(*Writer).Write(0xc0003d1680, 0xc04d62c000, 0x763, 0x8000, 0xc00055afd0, 0xc00055afd8, 0x8001)
	<autogenerated>:1 +0x76 fp=0xc00055af50 sp=0xc00055aef8 pc=0x7fdff6589fe6
github.com/linuxkit/linuxkit/src/cmd/linuxkit/vendor/github.com/surma/gocpio.(*Writer).countedWrite(0xc00037a360, 0xc04d62c000, 0x763, 0x8000, 0xc00055b018, 0x7fdff637457d, 0x7fdff79aa520)
	/go/src/github.com/linuxkit/linuxkit/src/cmd/linuxkit/vendor/github.com/surma/gocpio/writer.go:105 +0x53 fp=0xc00055af98 sp=0xc00055af50 pc=0x7fdff658a973
github.com/linuxkit/linuxkit/src/cmd/linuxkit/vendor/github.com/surma/gocpio.(*Writer).Write(0xc00037a360, 0xc04d62c000, 0x763, 0x8000, 0x763, 0x7fdff79aa520, 0xc00009a030)
	/go/src/github.com/linuxkit/linuxkit/src/cmd/linuxkit/vendor/github.com/surma/gocpio/writer.go:99 +0x5d fp=0xc00055afe0 sp=0xc00055af98 pc=0x7fdff658a8cd
github.com/linuxkit/linuxkit/src/cmd/linuxkit/initrd.(*Writer).Write(0xc000244300, 0xc04d62c000, 0x763, 0x8000, 0x763, 0x7fdff79aa520, 0xc00009a030)
	/go/src/github.com/linuxkit/linuxkit/src/cmd/linuxkit/initrd/initrd.go:168 +0x4f fp=0xc00055b028 sp=0xc00055afe0 pc=0x7fdff658bb3f
io.copyBuffer(0x7fdff79aa560, 0xc000244300, 0x7fdff79aa080, 0xc0000c6480, 0xc04d62c000, 0x8000, 0x8000, 0xc00055b150, 0x0, 0x0)
	/usr/lib/go/src/io/io.go:404 +0x203 fp=0xc00055b098 sp=0xc00055b028 pc=0x7fdff5fd6ce3
io.Copy(0x7fdff79aa560, 0xc000244300, 0x7fdff79aa080, 0xc0000c6480, 0x0, 0x0, 0x2)
	/usr/lib/go/src/io/io.go:364 +0x5c fp=0xc00055b0f8 sp=0xc00055b098 pc=0x7fdff5fd69ac
github.com/linuxkit/linuxkit/src/cmd/linuxkit/initrd.copyTarEntry(0xc000244300, 0xc04d573c00, 0x7fdff79aa080, 0xc0000c6480, 0x15, 0x0, 0x0)
	/go/src/github.com/linuxkit/linuxkit/src/cmd/linuxkit/initrd/initrd.go:87 +0x28d fp=0xc00055b1b0 sp=0xc00055b0f8 pc=0x7fdff658b1ed
github.com/linuxkit/linuxkit/src/cmd/linuxkit/initrd.CopySplitTar(0xc000244300, 0xc0000c6480, 0x1, 0x7fdff2f91c90, 0x7fdff5fc79b0, 0xc00055b2b0, 0xc00055b2c0, 0x10, 0x7fdff75f0700, 0x7fdff75f0700, ...)
	/go/src/github.com/linuxkit/linuxkit/src/cmd/linuxkit/initrd/initrd.go:144 +0x1d4 fp=0xc00055b250 sp=0xc00055b1b0 pc=0x7fdff658b484
github.com/linuxkit/linuxkit/src/cmd/linuxkit/moby.tarToInitrd(0x7fdff79ad760, 0xc00000e020, 0x7fdff603da31, 0xc000230d70, 0xc000000300, 0xc0000ba0a0, 0x7fdff5f85473, 0x7fdff6f0a332, 0x7ffc35258673, 0xd, ...)
	/go/src/github.com/linuxkit/linuxkit/src/cmd/linuxkit/moby/output.go:272 +0x1cb fp=0xc00055b340 sp=0xc00055b250 pc=0x7fdff65a2fbb
github.com/linuxkit/linuxkit/src/cmd/linuxkit/moby.glob..func2(0xc0000400f0, 0x2d, 0x7fdff79ad760, 0xc00000e020, 0x400, 0x1, 0xc00055b460, 0x7fdff60682a1)
	/go/src/github.com/linuxkit/linuxkit/src/cmd/linuxkit/moby/output.go:48 +0x4f fp=0xc00055b3e8 sp=0xc00055b340 pc=0x7fdff65a86ef
github.com/linuxkit/linuxkit/src/cmd/linuxkit/moby.Formats(0xc0000400f0, 0x2d, 0xc000230d70, 0xe, 0xc00009b380, 0x1, 0x1, 0x400, 0x1, 0x0, ...)
	/go/src/github.com/linuxkit/linuxkit/src/cmd/linuxkit/moby/output.go:260 +0x29a fp=0xc00055b4a8 sp=0xc00055b3e8 pc=0x7fdff65a2d0a
main.build(0xc0000b8020, 0x7, 0x7)
	/go/src/github.com/linuxkit/linuxkit/src/cmd/linuxkit/build.go:220 +0x116c fp=0xc00055bf00 sp=0xc00055b4a8 pc=0x7fdff6eb036c
main.main()
	/go/src/github.com/linuxkit/linuxkit/src/cmd/linuxkit/main.go:121 +0x417 fp=0xc00055bf98 sp=0xc00055bf00 pc=0x7fdff6eb6127
runtime.main()
	/usr/lib/go/src/runtime/proc.go:201 +0x20f fp=0xc00055bfe0 sp=0xc00055bf98 pc=0x7fdff5fa082f
runtime.goexit()
	/usr/lib/go/src/runtime/asm_amd64.s:1333 +0x1 fp=0xc00055bfe8 sp=0xc00055bfe0 pc=0x7fdff5fcc641

goroutine 19 [syscall]:
os/signal.signal_recv(0x0)
	/usr/lib/go/src/runtime/sigqueue.go:139 +0x9e
os/signal.loop()
	/usr/lib/go/src/os/signal/signal_unix.go:23 +0x24
created by os/signal.init.0
	/usr/lib/go/src/os/signal/signal_unix.go:29 +0x43

goroutine 25 [IO wait]:
internal/poll.runtime_pollWait(0x7fdff2f91f00, 0x72, 0xc000064a88)
	/usr/lib/go/src/runtime/netpoll.go:173 +0x68
internal/poll.(*pollDesc).wait(0xc000266418, 0x72, 0xffffffffffffff00, 0x7fdff79affe0, 0x7fdff8b3eac0)
	/usr/lib/go/src/internal/poll/fd_poll_runtime.go:85 +0x9c
internal/poll.(*pollDesc).waitRead(0xc000266418, 0xc000246000, 0x1000, 0x1000)
	/usr/lib/go/src/internal/poll/fd_poll_runtime.go:90 +0x3f
internal/poll.(*FD).Read(0xc000266400, 0xc000246000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/usr/lib/go/src/internal/poll/fd_unix.go:169 +0x17b
net.(*netFD).Read(0xc000266400, 0xc000246000, 0x1000, 0x1000, 0xc000064b70, 0x7fdff5fc91f0, 0xc000000300)
	/usr/lib/go/src/net/fd_unix.go:202 +0x51
net.(*conn).Read(0xc0000b2a40, 0xc000246000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/usr/lib/go/src/net/net.go:177 +0x6a
net/http.(*persistConn).Read(0xc00041d7a0, 0xc000246000, 0x1000, 0x1000, 0xc000064c70, 0x7fdff5f78335, 0xc00008a4e0)
	/usr/lib/go/src/net/http/transport.go:1497 +0x77
bufio.(*Reader).fill(0xc00026ff20)
	/usr/lib/go/src/bufio/bufio.go:100 +0x111
bufio.(*Reader).Peek(0xc00026ff20, 0x1, 0x0, 0x0, 0x1, 0xc00008a420, 0x0)
	/usr/lib/go/src/bufio/bufio.go:132 +0x41
net/http.(*persistConn).readLoop(0xc00041d7a0)
	/usr/lib/go/src/net/http/transport.go:1645 +0x1a4
created by net/http.(*Transport).dialConn
	/usr/lib/go/src/net/http/transport.go:1338 +0x943

goroutine 26 [select]:
net/http.(*persistConn).writeLoop(0xc00041d7a0)
	/usr/lib/go/src/net/http/transport.go:1885 +0x115
created by net/http.(*Transport).dialConn
	/usr/lib/go/src/net/http/transport.go:1339 +0x968

goroutine 28 [IO wait]:
internal/poll.runtime_pollWait(0x7fdff2f91e30, 0x72, 0xc000065a88)
	/usr/lib/go/src/runtime/netpoll.go:173 +0x68
internal/poll.(*pollDesc).wait(0xc000266898, 0x72, 0xffffffffffffff00, 0x7fdff79affe0, 0x7fdff8b3eac0)
	/usr/lib/go/src/internal/poll/fd_poll_runtime.go:85 +0x9c
internal/poll.(*pollDesc).waitRead(0xc000266898, 0xc0000ca000, 0x1000, 0x1000)
	/usr/lib/go/src/internal/poll/fd_poll_runtime.go:90 +0x3f
internal/poll.(*FD).Read(0xc000266880, 0xc0000ca000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/usr/lib/go/src/internal/poll/fd_unix.go:169 +0x17b
net.(*netFD).Read(0xc000266880, 0xc0000ca000, 0x1000, 0x1000, 0xc000065b70, 0x7fdff5fc91f0, 0xc000000300)
	/usr/lib/go/src/net/fd_unix.go:202 +0x51
net.(*conn).Read(0xc0000b2a78, 0xc0000ca000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/usr/lib/go/src/net/net.go:177 +0x6a
net/http.(*persistConn).Read(0xc00041d9e0, 0xc0000ca000, 0x1000, 0x1000, 0xc000065c70, 0x7fdff5f78335, 0xc00008aa20)
	/usr/lib/go/src/net/http/transport.go:1497 +0x77
bufio.(*Reader).fill(0xc00001a2a0)
	/usr/lib/go/src/bufio/bufio.go:100 +0x111
bufio.(*Reader).Peek(0xc00001a2a0, 0x1, 0x0, 0x0, 0x1, 0xc00008a960, 0x0)
	/usr/lib/go/src/bufio/bufio.go:132 +0x41
net/http.(*persistConn).readLoop(0xc00041d9e0)
	/usr/lib/go/src/net/http/transport.go:1645 +0x1a4
created by net/http.(*Transport).dialConn
	/usr/lib/go/src/net/http/transport.go:1338 +0x943

goroutine 29 [select]:
net/http.(*persistConn).writeLoop(0xc00041d9e0)
	/usr/lib/go/src/net/http/transport.go:1885 +0x115
created by net/http.(*Transport).dialConn
	/usr/lib/go/src/net/http/transport.go:1339 +0x968

goroutine 31 [IO wait]:
internal/poll.runtime_pollWait(0x7fdff2f91d60, 0x72, 0xc000069a88)
	/usr/lib/go/src/runtime/netpoll.go:173 +0x68
internal/poll.(*pollDesc).wait(0xc00033a318, 0x72, 0xffffffffffffff00, 0x7fdff79affe0, 0x7fdff8b3eac0)
	/usr/lib/go/src/internal/poll/fd_poll_runtime.go:85 +0x9c
internal/poll.(*pollDesc).waitRead(0xc00033a318, 0xc000270000, 0x1000, 0x1000)
	/usr/lib/go/src/internal/poll/fd_poll_runtime.go:90 +0x3f
internal/poll.(*FD).Read(0xc00033a300, 0xc000270000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/usr/lib/go/src/internal/poll/fd_unix.go:169 +0x17b
net.(*netFD).Read(0xc00033a300, 0xc000270000, 0x1000, 0x1000, 0x1, 0x0, 0x7fdff6eff010)
	/usr/lib/go/src/net/fd_unix.go:202 +0x51
net.(*conn).Read(0xc0000b2048, 0xc000270000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/usr/lib/go/src/net/net.go:177 +0x6a
net/http.(*persistConn).Read(0xc00041c000, 0xc000270000, 0x1000, 0x1000, 0xc00007fe60, 0xc00041c000, 0x0)
	/usr/lib/go/src/net/http/transport.go:1497 +0x77
bufio.(*Reader).fill(0xc00026e1e0)
	/usr/lib/go/src/bufio/bufio.go:100 +0x111
bufio.(*Reader).Peek(0xc00026e1e0, 0x1, 0x2, 0x0, 0x0, 0xc00008a660, 0x0)
	/usr/lib/go/src/bufio/bufio.go:132 +0x41
net/http.(*persistConn).readLoop(0xc00041c000)
	/usr/lib/go/src/net/http/transport.go:1645 +0x1a4
created by net/http.(*Transport).dialConn
	/usr/lib/go/src/net/http/transport.go:1338 +0x943

goroutine 32 [select]:
net/http.(*persistConn).writeLoop(0xc00041c000)
	/usr/lib/go/src/net/http/transport.go:1885 +0x115
created by net/http.(*Transport).dialConn
	/usr/lib/go/src/net/http/transport.go:1339 +0x968
"
[ E48 ] EIF building error. Such error appears when trying to build an EIF file. In this case, the error backtrace provides detailed information on the failure reason.

For more details, please visit https://docs.aws.amazon.com/enclaves/latest/user/cli-errors.html#E48

Any idea how to fix this? I expect that it won't be easy to get snap running inside the enclave, but it would unlock a powerful use case for me, so I want to take a shot at it.

allocator service fails but reports that it succeeded

This is from an m5.16xlarge instance. You can see Memory configuration failed in the logs, and free -h does not show 128G as reserved/in-use. The service incorrectly reports that it succeeded, and there is a logged message towards the end claiming success as well.

[ec2-user@ip-172-31-34-34 ~]$ sudo systemctl status nitro-enclaves-allocator.service
● nitro-enclaves-allocator.service - Nitro Enclaves Resource Allocator
   Loaded: loaded (/usr/lib/systemd/system/nitro-enclaves-allocator.service; enabled; vendor preset: disabled)
   Active: active (exited) since Thu 2021-10-07 18:11:54 UTC; 5min ago
  Process: 8948 ExecStart=/usr/bin/nitro-enclaves-allocator (code=exited, status=0/SUCCESS)
 Main PID: 8948 (code=exited, status=0/SUCCESS)
    Tasks: 0
   Memory: 0B
   CGroup: /system.slice/nitro-enclaves-allocator.service

Oct 07 18:11:49 ip-172-31-34-34.us-east-2.compute.internal nitro-enclaves-allocator[8948]: Will try to reserve 131072 MB of memory on node 1.
Oct 07 18:11:49 ip-172-31-34-34.us-east-2.compute.internal nitro-enclaves-allocator[8948]: Configuring the huge page memory...
Oct 07 18:11:50 ip-172-31-34-34.us-east-2.compute.internal nitro-enclaves-allocator[8948]: - Reserved 123 pages of type: 1048576kB.
Oct 07 18:11:50 ip-172-31-34-34.us-east-2.compute.internal nitro-enclaves-allocator[8948]: - Reserved 662 pages of type: 2048kB.
Oct 07 18:11:50 ip-172-31-34-34.us-east-2.compute.internal nitro-enclaves-allocator[8948]: Memory configuration failed, rolling back memory reservations...
Oct 07 18:11:53 ip-172-31-34-34.us-east-2.compute.internal nitro-enclaves-allocator[8948]: Auto-generated the enclave CPU pool: 16,48,17,49,18,50,19,51,20,52,21,5...1,63.
Oct 07 18:11:53 ip-172-31-34-34.us-east-2.compute.internal nitro-enclaves-allocator[8948]: Configuring the enclave CPU pool...
Oct 07 18:11:54 ip-172-31-34-34.us-east-2.compute.internal nitro-enclaves-allocator[8948]: Done.
Oct 07 18:11:54 ip-172-31-34-34.us-east-2.compute.internal nitro-enclaves-allocator[8948]: Successfully allocated Nitro Enclaves resources: 131072 MiB, 32 CPUs
Oct 07 18:11:54 ip-172-31-34-34.us-east-2.compute.internal systemd[1]: Started Nitro Enclaves Resource Allocator.
Hint: Some lines were ellipsized, use -l to show in full.

This is the hash of the allocator script:

$ sha256sum /usr/bin/nitro-enclaves-allocator
916fdb9d7d7d769c9cf141b9794e11983fedc363ac43e9c139a96e43aa266163  /usr/bin/nitro-enclaves-allocator

The version above was installed with amazon-linux-extras/yum, but it matches the file in this repo.
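Until the status reporting is fixed, the unit output cannot be trusted; the kernel's own counters show what was actually reserved. A quick check (the sysfs path below assumes 1 GiB pages on NUMA node 1, matching the log above):

grep HugePages_ /proc/meminfo
cat /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages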

Cannot start nitro-enclaves-allocator.service with systemctl

nitro-cli --version
Nitro CLI 1.2.0

uname -a
Linux amazonlinux.onprem 4.14.285-215.501.amzn2.x86_64 #1 SMP Mon Jun 27 23:38:14 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

sudo systemctl status nitro-enclaves-allocator.service
● nitro-enclaves-allocator.service - Nitro Enclaves Resource Allocator
Loaded: loaded (/usr/lib/systemd/system/nitro-enclaves-allocator.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2022-07-19 09:29:26 UTC; 1h 11min ago
Process: 12792 ExecStart=/usr/bin/nitro-enclaves-allocator (code=exited, status=1/FAILURE)
Main PID: 12792 (code=exited, status=1/FAILURE)

Jul 19 09:29:26 amazonlinux.onprem systemd[1]: Starting Nitro Enclaves Resource Allocator...
Jul 19 09:29:26 amazonlinux.onprem nitro-enclaves-allocator[12792]: /usr/bin/nitro-enclaves-allocator: line 130: /sys/module/nitro_enclaves/parameters/ne_cpus: No such file or directory
Jul 19 09:29:26 amazonlinux.onprem nitro-enclaves-allocator[12792]: cat: .tmp_file: No such file or directory
Jul 19 09:29:26 amazonlinux.onprem nitro-enclaves-allocator[12792]: rm: cannot remove ‘.tmp_file’: No such file or directory
Jul 19 09:29:26 amazonlinux.onprem nitro-enclaves-allocator[12792]: Error: The CPU pool file is missing. Please make sure the Nitro Enclaves driver is inserted.
Jul 19 09:29:26 amazonlinux.onprem systemd[1]: nitro-enclaves-allocator.service: main process exited, code=exited, status=1/FAILURE
Jul 19 09:29:26 amazonlinux.onprem systemd[1]: Failed to start Nitro Enclaves Resource Allocator.
Jul 19 09:29:26 amazonlinux.onprem systemd[1]: Unit nitro-enclaves-allocator.service entered failed state.
Jul 19 09:29:26 amazonlinux.onprem systemd[1]: nitro-enclaves-allocator.service failed.
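The missing ne_cpus file means the nitro_enclaves kernel module is not loaded, exactly as the last error message suggests. On an instance launched with enclave support enabled, loading the module should create the parameter file the allocator script needs (a diagnostic sketch; on a host without the Nitro Enclaves PCI device, as this on-prem machine appears to be, modprobe alone will not help):

sudo modprobe nitro_enclaves
ls /sys/module/nitro_enclaves/parameters/ne_cpus
sudo systemctl start nitro-enclaves-allocator.service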

Driver compile issue

Compiling the driver on an AWS VM running kernel 5.4.0-1048-aws gives:

make -C "/lib/modules/5.4.0-1048-aws/build" M=/home/ubuntu/Documents/aws-nitro-enclaves-cli/drivers/virt/nitro_enclaves modules
make[1]: Entering directory '/usr/src/linux-headers-5.4.0-1048-aws'
  CC [M]  /home/ubuntu/Documents/aws-nitro-enclaves-cli/drivers/virt/nitro_enclaves/ne_misc_dev.o
/home/ubuntu/Documents/aws-nitro-enclaves-cli/drivers/virt/nitro_enclaves/ne_misc_dev.c:143:12: error: static declaration of ‘remove_cpu’ follows non-static declaration
  143 | static int remove_cpu(u32 cpu_id)
      |            ^~~~~~~~~~
In file included from /home/ubuntu/Documents/aws-nitro-enclaves-cli/drivers/virt/nitro_enclaves/ne_misc_dev.c:13:
./include/linux/cpu.h:122:5: note: previous declaration of ‘remove_cpu’ was here
  122 | int remove_cpu(unsigned int cpu);
      |     ^~~~~~~~~~
/home/ubuntu/Documents/aws-nitro-enclaves-cli/drivers/virt/nitro_enclaves/ne_misc_dev.c:171:12: error: static declaration of ‘add_cpu’ follows non-static declaration
  171 | static int add_cpu(u32 cpu_id)
      |            ^~~~~~~
In file included from /home/ubuntu/Documents/aws-nitro-enclaves-cli/drivers/virt/nitro_enclaves/ne_misc_dev.c:13:
./include/linux/cpu.h:92:5: note: previous declaration of ‘add_cpu’ was here
   92 | int add_cpu(unsigned int cpu);
      |     ^~~~~~~
make[2]: *** [scripts/Makefile.build:271: /home/ubuntu/Documents/aws-nitro-enclaves-cli/drivers/virt/nitro_enclaves/ne_misc_dev.o] Error 1
make[1]: *** [Makefile:1760: /home/ubuntu/Documents/aws-nitro-enclaves-cli/drivers/virt/nitro_enclaves] Error 2
make[1]: Leaving directory '/usr/src/linux-headers-5.4.0-1048-aws'
make: *** [Makefile:19: all] Error 2

I think the #if LINUX_VERSION_CODE < KERNEL_VERSION(5, 7, 0) compile check is incorrect. Changing it to kernel 5.4 resolves the compile error, but I didn't check if the driver still works correctly.
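A quick way to check whether a given set of headers already declares these symbols, which is what the version guard is trying to approximate (the header path assumes the Ubuntu layout from the log above):

grep -n "int add_cpu" /usr/src/linux-headers-$(uname -r)/include/linux/cpu.h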

Instances with NitroEnclave support lack the required PCI device

Last year I started to work on NitroEnclave support for SUSE instances. This was difficult because the launched instances lacked the required PCI device, which the kernel driver expects.

It turned out that an instance using the base image named suse-sles-15-sp3-byos-v20210622-hvm-ssd-x86_64 got the required PCI device, but the very same instance type with an opensuse-tumbleweed-* image did not. As a result the kernel driver cannot work, and consequently the tooling found in this repository cannot be used in the launched instance.

This bug still exists today. Instances based on the upcoming suse-sles-15-sp4-* images lack the PCI device as well.

I was told the PCI device is always supposed to be present if "Nitro Enclaves" is requested during instance creation via the web UI.

Right now I do not have an instance ID at hand that lacks the PCI device; I will provide one as soon as I get such an instance up and running.
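In the meantime, the device is easy to check for from inside a running instance. The nitro_enclaves driver binds to Amazon's enclave controller; the vendor/device IDs below (1d0f:e4c1) are taken from the driver source, so treat this as a best-effort probe:

lspci -d 1d0f:e4c1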

Allocator service triggers EINVAL error in dmesg

This piece of code https://github.com/aws/aws-nitro-enclaves-cli/blame/main/bootstrap/nitro-enclaves-allocator#L129-L138 is called every time the allocator service is started.

It writes an empty string to /sys/module/nitro_enclaves/parameters/ne_cpus. The driver however doesn't know how to interpret this empty string (see https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/drivers/virt/nitro_enclaves/ne_misc_dev.c#n431) so it tries to allocate this invalid cpu pool, fails, then prints EINVAL to dmesg.

Everything works perfectly fine, however, so there is no actual issue; it's just that seeing errors in dmesg is super confusing when debugging other things.

Edit: this is not an issue with nitro-cli, just with the driver.
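For anyone debugging around this, the noise is straightforward to reproduce by hand, which also confirms it comes from the driver's parsing of the empty string rather than from the CLI (harmless, but it does log the error):

echo "" | sudo tee /sys/module/nitro_enclaves/parameters/ne_cpus
dmesg | tail -n 5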

nitro-enclaves-allocator: get_enclave_cpus not defined

When specifying a CPU pool instead of count in /etc/nitro_enclaves/allocator.yaml, I get the following error when starting nitro-enclave-allocator:

Nov 05 14:40:30 ip-172-31-24-166.ec2.internal nitro-enclaves-allocator[8407]: /usr/bin/nitro-enclaves-allocator: line 152: get_enclave_cpus: command not found

Also, specifying a count instead of a pool doesn't work properly if there are currently offline CPUs.
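For reference, a pool-based configuration along these lines triggers the failing code path (the key names match the shipped allocator.yaml template; the pool values are only an example):

sudo tee /etc/nitro_enclaves/allocator.yaml <<'EOF'
---
memory_mib: 512
cpu_pool: 2,3,6-9
EOF
sudo systemctl restart nitro-enclaves-allocator.service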

The error documentation host `enclaves.aws.amazon.com` does not seem to exist

nitro-cli generates errors that direct users to "For more details, please visit http://enclaves.aws.amazon.com/nitro-cli/", but the host does not seem to be publicly accessible.

$  nitro-cli run-enclave --cpu-count 1 --memory 256 --eif-path hello.eif --debug-mode

[ E19 ] File operation failure. Such error appears when the system fails to perform the requested file operations, such as opening the EIF file when launching an enclave, or seeking to a specific offset in the EIF file, or writing to the log file.

For more details, please visit http://enclaves.aws.amazon.com/nitro-cli/errors#E19

If you open a support ticket, please provide the error log found at "/var/log/nitro_enclaves/err2021-01-18T22:22:46.900607914+00:00.log"
Failed connections: 1
[ E39 ] Enclave process connection failure. Such error appears when the enclave manager fails to connect to at least one enclave process for retrieving the description information.

For more details, please visit http://enclaves.aws.amazon.com/nitro-cli/errors#E39

If you open a support ticket, please provide the error log found at "/var/log/nitro_enclaves/err2021-01-18T22:22:46.901092465+00:00.log"

Assigning vGPU to Enclaves

I've been using nitro-cli extensively, so thank you!

However, when we nitro-cli run-enclave, we can define the vCPUs and RAM to assign to the enclave, but there currently seems to be no way of assigning GPU resources. Is this on the roadmap? If not, is there a workaround you could suggest?

`nitro-cli build-enclave` failing with DockerError

If I use the nitro-cli that's packaged via amazon-linux-extras install aws-nitro-enclaves-cli, I can run build-enclave without issue (nitro-cli --version reports Nitro CLI 1.0.9). But building the current main branch on the same EC2 instance and running it with the same arguments results in:

./target/debug/nitro-cli build-enclave --docker-uri hello-enclave:1.0 --output-file what.eif
Start building the Enclave Image...
Docker error: PullError
[ E50 ] Docker image pull error. Such error appears when trying to build an EIF file, but pulling the corresponding docker image fails. In this case, the error backtrace provides detailed information on the failure reason.

For more details, please visit https://docs.aws.amazon.com/enclaves/latest/user/cli-errors.html#E50

If you open a support ticket, please provide the error log found at "/var/log/nitro_enclaves/err2021-04-05T21:34:26.317152048+00:00.log"

And the /var/log/nitro_enclaves/err2021-04-05T21:34:26.317152048+00:00.log file contains:

 Action: Build Enclave
  Subactions:
    Failed to build enclave
    Failed to build EIF from docker
    Failed to pull docker image: DockerError
  Root error file: src/lib.rs
  Root error line: 145
  Build commit: 4d51695
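One difference worth ruling out: newer builds appear to attempt a registry pull of the given URI, so a locally built tag that was never pushed can trip the pull step even though the daemon-based 1.0.9 flow handled it. Confirming how the tag resolves is a cheap first check (a troubleshooting sketch, not a confirmed root cause):

docker image inspect hello-enclave:1.0 >/dev/null && echo "tag exists locally"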

FTBFS: unused return value of str::<impl str>::to_lowercase that must be used

[  390s]    Compiling enclave_build v0.1.0 (/home/abuild/rpmbuild/BUILD/aws-nitro-enclaves-cli-1.1.0~git13.d7c35e9/enclave_build)
[  398s] error: unused return value of `str::<impl str>::to_lowercase` that must be used
[  398s]    --> src/common/logger.rs:145:9
[  398s]     |
[  398s] 145 |         level.to_lowercase();
[  398s]     |         ^^^^^^^^^^^^^^^^^^^^^
[  398s]     |
[  398s] note: the lint level is defined here
[  398s]    --> src/common/logger.rs:4:9
[  398s]     |
[  398s] 4   | #![deny(warnings)]
[  398s]     |         ^^^^^^^^
[  398s]     = note: `#[deny(unused_must_use)]` implied by `#[deny(warnings)]`
[  398s]     = note: this returns the lowercase string as a new String, without modifying the original
[  398s] 
[  398s] error: could not compile `nitro-cli` due to previous error

Likely related to Rust 1.57. (OBS package)
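The fix is presumably a one-liner in logger.rs: to_lowercase() returns a new String rather than mutating in place, so the result has to be bound, e.g. let level = level.to_lowercase(); shadowing the original (a sketch; the actual upstream patch may differ).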

Segfault in init

Could not open /env file: No such file or directory
Could not open /env file: No such file or directory
[    0.793543] init[1]: segfault at 0 ip 0000000000410e84 sp 00007fff27e27340 error 4 in init[400000+d4000]

The IP corresponds to fclose. You can't call fclose(NULL).
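In other words, init appears to pass the NULL returned by the failed fopen of /env straight to fclose; a guard along the lines of if (f != NULL) fclose(f); would leave only the warning instead of a crash (a sketch of the likely fix, not the actual patch).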

Update documentation on whether/why kernel drivers are required for building enclave image files

It's unclear whether the kernel drivers are needed for the build-enclave command.

This makes it harder to incorporate nitro-cli into existing CD pipelines. If the drivers really are required, that's totally fine (the wins are plentiful and would justify the cost), but can the documentation please be updated to highlight why they're needed?

If the kernel drivers are not needed for the build phase, maybe nitro-cli can be provided as a Docker image with a subset of commands being available when running without the Nitro kernel drivers?
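For what it's worth, build-enclave appears to only drive Docker and the bundled linuxkit and never opens the enclave device, so it should work on hosts without the driver loaded (an observation, not an official guarantee):

nitro-cli build-enclave --docker-uri hello-enclave:1.0 --output-file hello.eif
ls /dev/nitro_enclaves 2>/dev/null || echo "no driver on this host; the build above still works"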

Documentation and error messages do not make it clear that Linux hugepages are a requirement for starting an enclave

Trying to start / run an enclave from a default EKS pod fails with an error.

$ ./run.sh
+ nitro-cli run-enclave --cpu-count 2 --memory 512 --eif-path hello.eif --debug-mode
Start allocating memory...
[ E35 ] EIF file parsing error. Such error appears when attempting to fill a memory region with a section of the EIF file, but reading the entire section fails.

For more details, please visit http://enclaves.aws.amazon.com/nitro-cli/errors#E35

If you open a support ticket, please provide the error log found at "/var/log/nitro_enclaves/err2021-02-19T23:19:46.661390373+00:00.log"
Failed connections: 1
[ E39 ] Enclave process connection failure. Such error appears when the enclave manager fails to connect to at least one enclave process for retrieving the description information.

For more details, please visit http://enclaves.aws.amazon.com/nitro-cli/errors#E39

If you open a support ticket, please provide the error log found at "/var/log/nitro_enclaves/err2021-02-19T23:19:46.661536416+00:00.log"
$ cat /var/log/nitro_enclaves/err2021-02-19T23:19:46.661390373+00:00.log
  Action: Run Enclave
  Subactions:
    Failed to execute command `Run`
    Failed to trigger enclave run
    Failed to run enclave
    Failed to create enclave
    Memory initialization issue
    Write EIF to enclave memory regions
    Failed to fill region with file content
    Error while reading from enclave image: Os { code: 14, kind: Other, message: "Bad address" }
  Root error file: src/enclave_proc/resource_manager.rs
  Root error line: 295
  Build commit: not available
$ cat /var/log/nitro_enclaves/err2021-02-19T23:19:46.661536416+00:00.log
  Action: Run Enclave
  Subactions:
    Failed to handle all enclave process replies
    Failed to connect to 1 enclave processes
  Root error file: src/enclave_proc_comm.rs
  Root error line: 344
  Build commit: not available

Even with privileged mode this still fails.

After some troubleshooting it became clear that hugepages need to be made available inside the pod using something like this:

        requests:
          hugepages-2Mi: 512Mi

The docs/error should make it a bit more clear that this could be the issue.
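With the request in place, the reservation can be double-checked from inside the pod. /proc/meminfo is not namespaced, so this reflects the node's pools (the pod's own share is enforced through cgroups); 512Mi of 2 MiB pages corresponds to 256 pages:

grep HugePages_ /proc/meminfo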

Importing Numpy and OpenCV in enclave

Is there a compatibility issue inside the enclave with Python packages such as cv2 and NumPy? When I import those libraries, the enclave just hangs: no error output or termination, just an OpenBLAS warning about the L2 cache.
Here is some example code of the import:

# amazonlinux still has the import issue
# with python:3.7, lib imports crash
FROM amazonlinux

WORKDIR /app
#py 3.7
RUN yum install python3 zip -y

ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"

#3 libs needed for cv2 import
RUN yum install libSM-1.2.2-2.amzn2.x86_64 -y
RUN yum install libXrender-0.9.10-1.amzn2.x86_64 -y
RUN yum install libXext-1.3.3-3.amzn2.x86_64 -y

COPY requirements.txt ./
RUN pip3 install --no-cache-dir -r /app/requirements.txt

#shell script testing
COPY dockerfile_entrypoint.sh ./

COPY test_cv2.py ./


#ENV for shell testing printf loop
ENV HELLO="Hello from enclave side!"
RUN chmod +x dockerfile_entrypoint.sh

#shell script testing
CMD ["/app/dockerfile_entrypoint.sh"]

The shell and Python scripts just run the import to show whether things are working:

#!/bin/bash

#shell printf loop test in enclave
# go to work dir and check files
cd /app||return
ls

#cv2 imp issue
python3 test_cv2.py

#use shell loop to keep enclave live to see error message output
count=1
while true;do
  printf "[%4d] $HELLO\n" $count
  echo "$PWD"
  ls
  count=$((count+1))
  sleep 5
done
And test_cv2.py:

import cv2

for i in range(10):
    print('testing OpenCV')
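One thing worth trying before digging deeper: OpenBLAS (which NumPy, and cv2 through it, link against) sizes its thread pool from CPU and cache detection, and the L2 cache warning suggests that detection misbehaves inside the enclave. Pinning it to a single thread before the import is a common workaround (an assumption to test, not a confirmed fix):

export OPENBLAS_NUM_THREADS=1
export OMP_NUM_THREADS=1
python3 test_cv2.py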

Sample command executor fails to build enclave

Running this command from the readme:

nitro-cli build-enclave --docker-dir "./resources" --docker-uri mytag --output-file command-executer.eif

throws an error

Docker error: BuildError
[ E49 ] Docker image build error. Such error appears when trying to build an EIF file, but building the corresponding docker image fails. In this case, the error backtrace provides detailed information on the failure reason.

Looking at the Dockerfile, it expects the command-executer binary in the current directory for the COPY command, but the cargo build places it in the target/release folder. Is this step assuming that both the Dockerfile and the command-executer binary are moved into the same folder before build-enclave is run? (A working sequence is sketched after the Dockerfile below.)

FROM ubuntu:latest
COPY command-executer .
RUN apt-get update && apt-get install -y \
    apt-utils
CMD ./command-executer listen --port 5005
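That appears to be the assumption: --docker-dir is the Docker build context, so the binary has to sit next to the Dockerfile before the CLI is invoked. A sketch of the working sequence (paths assume the sample's layout):

cargo build --release
cp target/release/command-executer resources/
nitro-cli build-enclave --docker-dir ./resources --docker-uri mytag --output-file command-executer.eif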

update vsock driver

For my use case, it is important that the vsock channel preserves packet boundaries. Vsock was updated to support this (https://lwn.net/Articles/846628/), which became part of the linux kernel since June (torvalds/linux@ced7b71).

How can I best "update" the vsock driver that my EC2 instance and enclave use to the most recent version? I'm currently running the "AWS Nitro Enclaves Developer AMI v1.01", which runs the Linux kernel "Linux 4.14.256-197.484.amzn2.x86_64 x86_64", but the vsock update is only contained in Linux kernel v5.14 and onwards, so it should be available in Fedora 34.

Would you recommend switching to a recent enough Fedora 34 AMI and following https://github.com/aws/aws-nitro-enclaves-cli/blob/main/docs/fedora_34_how_to_install_nitro_cli_from_github_sources.md to get the Nitro CLI running there, or is there an easier solution?

Thanks!

Edit: I tried to follow the tutorial with the Fedora-Cloud-Base-35-1.2.x86_64-hvm-us-east-2-gp2-0 AMI, which comes with the 5.15.7-200.fc35.x86_64 kernel. Everything works until I want to start an .eif: it gets stuck at "Start allocating memory...". If I cancel and try again, I get an ioctl error until I sudo reboot, after which it gets stuck again. Edit 2: Same thing with the Fedora-Cloud-Base-34-20211214.0.x86_64-hvm-us-east-2-gp2-0 AMI.

Cannot build an enclave file after a couple of successful runs of nitro-cli build-enclave

After a couple of successful builds nitro-cli crashes with:
Linuxkit reported an error while creating the customer ramfs: "Add init containers: Process init image: docker.io/library/<my image> Add files: rootfs/dev rootfs/run rootfs/sys rootfs/var rootfs/proc rootfs/tmp cmd env Create outputs:"

The log in /var/log/nitro_enclaves says:

  Action: Build Enclave
  Subactions:
    Failed to build enclave
    Failed to build EIF from docker
    Failed to create EIF image: LinuxkitExecError
  Root error file: src/lib.rs
  Root error line: 152
  Build commit: v1.0.10-42-gde77067

The only fix I have found is to restart the EC2 machine.

env issue with python docker image

Based on https://levelup.gitconnected.com/running-python-app-on-aws-nitro-enclaves-56024667b684, the docker python image does not start under an enclave. Investigate why this is happening:

For Python app development, using a Python base image would be an obvious step. But for some reason, the python image is not running properly in the Nitro Enclave.
No matter whether I use the standard image, the -slim image, or the -alpine image, and even when my Dockerfile just runs a simple echo command, an error Could not open /env file: No such file or directory comes up.

Pre-launch certificate validation in EIF

Currently (in the "0.1.0" pre-release that ships on AWS images), trying to nitro-cli run-enclave with an EIF that contains invalid certificates (e.g. expired ones) leads to a series of obscure error messages that may be confusing to end users: [ E36 ] Enclave boot failure..., [ E39 ] Enclave process connection failure..., [ E11 ] Socket error..., with detailed logs saying something like:

  Action: Run Enclave
  Subactions:
    Failed to execute command `Run`
    Failed to trigger enclave run
    Failed to run enclave
    Failed to create enclave
    Waiting on enclave to boot failed with error VsockTimeoutError
  Root error file: src/enclave_proc/resource_manager.rs
  Root error line: 572


To help with troubleshooting such problems, it may be good to either extend the run-enclave command to validate EIFs before attempting to load them, or to add a new command (validate-eif?) to nitro-cli.
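As a partial stopgap, later nitro-cli releases include a describe-eif subcommand that parses an EIF and reports its metadata and signature status without launching it (availability depends on the installed CLI version):

nitro-cli describe-eif --eif-path signed.eif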

SOURCES and sources in the root dir conflict on case-insensitive filesystems like HFS/APFS.

On macOS 11.2:

❯ git clone git@github.com:aws/aws-nitro-enclaves-cli.git
Cloning into 'aws-nitro-enclaves-cli'...
remote: Enumerating objects: 25, done.
remote: Counting objects: 100% (25/25), done.
remote: Compressing objects: 100% (20/20), done.
remote: Total 2772 (delta 4), reused 15 (delta 2), pack-reused 2747
Receiving objects: 100% (2772/2772), 49.28 MiB | 24.71 MiB/s, done.
Resolving deltas: 100% (1887/1887), done.
warning: the following paths have collided (e.g. case-sensitive paths
on a case-insensitive filesystem) and only one from the same
colliding group is in the working tree:

  'SOURCES'
  'sources'

I would recommend renaming one of these (or moving one into a subdirectory) unless it is critical that they be set up this way.
