
kbs's Introduction

Trusted Components for Attestation and Secret Management


This repository contains tools and components for attesting confidential guests and providing secrets to them. Collectively, these components are known as Trustee. Trustee typically operates on behalf of the guest owner and interacts remotely with guest components.

Trustee was developed for the Confidential Containers project, but can be used with a wide variety of applications and hardware platforms.

Components

For further information, see documentation of individual components.

Architecture

Trustee is flexible and can be deployed in several different configurations. This figure shows one common way to deploy these components in conjunction with certain guest components.

flowchart LR
    AA -- attests guest ----> KBS
    CDH -- requests resource --> KBS
    subgraph Guest
        CDH <.-> AA
    end
    subgraph Trustee
        AS -- verifies evidence --> KBS
        RVPS -- provides reference values--> AS
    end
    client-tool -- configures --> KBS

Deployment

There are two main ways to deploy Trustee.

Docker Compose

One simple way to get started with Trustee is with Docker Compose, which can be used to quickly set up a cluster matching the diagram above.

Please refer to the cluster setup guide.

This cluster could be run inside a VM or as part of a managed service.

Kubernetes

There are two supported ways of deploying Trustee on Kubernetes. One is via the KBS Operator, which deploys the KBS components. The second is to use the Kubernetes tooling provided with the KBS.

License



kbs's Issues

Reference Value Provider Service Crate

As proposed in confidential-containers/confidential-containers#122,
the RVPS is a key component that gathers trusted digests for the AS. The TODO list follows.

Stage 1

Optimization

Support confidential resource register API

When #24 is finished, KBS will support users retrieving confidential resources. This raises a new question: how does a user of the KBS register new resources? Since we will use different modules to leverage the underlying storage, such as the local filesystem, we need a way to give users the ability to register new resources. I've considered two ways; please feel free to give any suggestions:

  • The user directly adds new resources via the underlying storage. For example, with localFs, a user can create directories and files under REPO_PATH.
  • The KBS server provides another API, and we implement a new client tool to connect to it.
    • Over HTTP. Since an OpenAPI endpoint over HTTP can be accessed by entities from different users, we should expose a single HTTP endpoint open only to localhost or a specific network, without authentication.
    • Over gRPC. Same considerations as HTTP.

The second way might be better, because we can then use the API to build new tools, such as one that integrates the whole process of image encryption, KEK registration (calling the registration API at this stage), image signing, and pushing.

Minimal CI

We could start by validating the OpenAPI 3.1 description file with redocly lint docs/kbs.yaml.

[RFC] General Attestation Service Design Proposal

Design

According to the [RFC] Generic Key Broker Service (KBS) & Attestation Service high-level architecture proposal, CC's general attestation system relies on the Attestation Service and KBS to verify the Attestation-Agent's running environment and distribute critical material (such as KEKs). The KBS plays the RATS Relying Party role; it is hardware-agnostic and focuses on vendor-specific requirements. The Attestation Service is an implementation of the RATS Verifier role; it focuses on verifying the Evidence's identity and TCB status.

CC's Attestation Service is a general infrastructure service that mostly focuses on providing an Evidence verification service to KBS. It therefore needs to be compatible with all CC-supported HW-TEEs and should scale to support potential new HW-TEEs in the future. Through modularization, it can also be made compatible with existing third-party attestation services.

To support verification of TCB status, the Attestation Service can parse the received Evidence to extract the corresponding TCB status. It also includes the Policy Engine component, an implementation of the RATS Verifier Owner: it relies on the reference data from the Reference Value Provider and the information extracted from the Evidence to dynamically evaluate the TCB status of the Attestation-Agent's running environment, and generates the corresponding Attestation Results.

The Attestation Service supports different deployment modes: as a service or as a library. A single Attestation Service may need to serve several KBS instances simultaneously when deployed as an infrastructure service. This enhances deployment flexibility.

Goals

  • Define the format of Evidence and Attestation Results messages for SGX and TDX.
  • Distribute the Attestation Service as a library or single service.
  • It's a general infrastructure service that:
    • should be compatible with various HW-TEEs and have good scalability;
    • should be compatible with different third-party attestation services.
  • Support a management API through which the Reference Value Provider can set reference data and other configurations.

Non-Goals

  • Define the format of Evidence and Attestation Results messages for SEV(-ES) and SEV-SNP.
  • Proxy sub-modules that support existing third-party attestation services.

Architecture

The following diagram demonstrates the overall architecture and internal modules:
(architecture diagram)

  • Service: Provides the interface used to communicate with KBS. It receives Evidence from KBS and returns the Attestation Results.
  • Attestation: The core module that implements the Evidence's attestation.
    • Verification Drivers: Includes HW-TEE-specific sub-modules and a policy module to verify Evidence identity and TCB status.
      1. TDX/SGX/SEV-SNP/SEV(-ES): Pluggable, HW-TEE-specific sub-modules that invoke the Endorser to verify the Evidence's identity.
      2. Policy Engine: Acts as the RATS Verifier Owner to verify TCB status.
    • Proxy: Includes pluggable proxy sub-modules that can invoke third-party attestation services to attest the Evidence.
  • Management API: Provides APIs used by the Reference Value Provider to set Verifier Owner reference data and apply other configurations.

Service Module

This module communicates with external modules such as KBS. Its main features are:

  • Support the following two different running modes:
    • As a Library: It can be integrated and invoked by KBS directly.
    • As a Service: Running as a separate service.
  • Receive the Evidence sent by KBS and return the corresponding Attestation Results.

Evidence

Evidence formats vary across HW-TEE types; for example:

  • TDX Evidence:
{
    "tee" : "tdx",
    "quote": <Base64 encoded quote>,
    // The ehd's hash will be embedded in Quote->report_data.
    "ehd": 
    {
        "nonce" : "cdfunsas23445xd",
        "public-key" : "xxxxxxxx"
    },
    "aad": 
    {
        "tdelInfo" : <Base64 encoded eventlog's information>,
        "tdelData" : <Base64 encoded eventlog's data>
    }
}
  • SGX Evidence:
{
    "tee" : "sgx",
    "quote": <Base64 encoded quote>,
    // The ehd's hash will be embedded in Quote->report_data.
    "ehd": 
    {
        "nonce" : "cdfunsas23445xd",
        "public-key" : "xxxxxxxx"
    },
    "aad":{}
}
  • SEV-SNP Evidence: TBD

Attestation Results

The Attestation Results of the Evidence comply with the Entity Attestation Token (EAT) format. The format should be:

{
    "alg": "RS256",
    "jku": "https://xxxxx.xxx/certs",
    "kid": <self signed certificate reference to perform signature verification of attestation token,
    "typ": "JWT"
}.{
    "exp": 1568187398,
    "iat": 1568158598,
    "iss": "https://xxxxxx",
    "attestation_results":{}
}.[Signature]
  • attestation_results: The result of the Evidence's attestation.
  • Signature (optional): The value will be empty in the following scenarios:
    • The Attestation Service is called as a crate.
    • The connection between the Attestation Service and KBS is secure and trusted.

The attestation_results differ across HW-TEE types; for example:

  • TDX Evidence attestation results:
"attestation_results":
{
    "tee": "TDX",
    "attestation-type": "TDX",
    "policy":
    {
        "allow": "true", //true/false: the verification state
        "name": <>, //The name of the used evaluation policy file
        "hash": <>, //The hash of the used evaluation policy file
        "diagnose":"" //Diagnose message from Policy Engine
    },
    "tcb":
    {
        "collateral": 
        {
            "qeidcertshash": <SHA256 value of QE Identity issuing certs>,
            "qeidcrlhash": <SHA256 value of QE Identity issuing certs CRL list>,
            "qeidhash": <SHA256 value of the QE Identity collateral>,
            "quotehash": <SHA256 value of the evaluated quote>, 
            "tcbinfocertshash": <SHA256 value of the TCB Info issuing certs>, 
            "tcbinfocrlhash": <SHA256 value of the TCB Info issuing certs CRL list>, 
            "tcbinfohash": <SHA256 value of the TCB Info collateral>
         },
        "ehd": <>, 
        "is-debuggable": true,
        "mrseam": "xxxxxx",
        "mrseamsigner": "xxxxxx", 
        "mrtd": "xxxxxx", 
        "rtmr0": "xxxxxx",
        "rtmr1": "xxxxxx",
        "rtmr2": "xxxxxx",
        "rtmr3": "xxxxxx",
        "cpusvn": 1,
        "svn": 1	  
    }
}
  • SGX Evidence attestation results:
"attestation_results":
{
    "tee": "SGX",
    "attestation-type": "SGX",   //String value representing attestation type
    "policy":
    {
        "allow": "true", //true/false: the verification state
        "name": <>, //The name of the used evaluation policy file
        "hash": <>, //The hash of the used evaluation policy file
        "diagnose":"" //Diagnose message from Policy Engine
    },
    "tcb":
    {
        "collateral": 
         {
            "qeidcertshash": <SHA256 value of QE Identity issuing certs>,
            "qeidcrlhash": <SHA256 value of QE Identity issuing certs CRL list>,
            "qeidhash": <SHA256 value of the QE Identity collateral>,
            "quotehash": <SHA256 value of the evaluated quote>, 
            "tcbinfocertshash": <SHA256 value of the TCB Info issuing certs>, 
            "tcbinfocrlhash": <SHA256 value of the TCB Info issuing certs CRL list>, 
            "tcbinfohash": <SHA256 value of the TCB Info collateral>
        },
        "ehd": <>, 
        "is-debuggable": true,
        "mrenclave": <SGX enclave mrenclave value>,
        "mrsigner": <SGX enclave msrigner value>, 
        "product-id": 1, 
        "svn": 1	  
    }
}
  • SEV-SNP Evidence attestation results:
    TODO

As a Library

The Attestation Service will provide the following interface:

pub fn attestation(evidence: String) -> Result<String, String>

attestation():

  • Role: Verify the evidence and return the corresponding Attestation Results.
  • Evidence (input): The Attestation-Agent-generated Evidence, a JSON string that includes the target HW-TEE identity and the TCB status information to be verified.
  • Attestation Results(Output): The Entity Attestation Token (EAT) format compliant Attestation Results of the Evidence.
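As a hedged illustration of the calling convention only, here is a sketch with a stub body (the stub logic and the evidence string are made up, not the real implementation):

```rust
// Stub standing in for the real attestation() implementation; it only
// illustrates the calling convention, not actual verification (hypothetical).
pub fn attestation(evidence: String) -> Result<String, String> {
    if evidence.is_empty() {
        return Err("empty evidence".to_string());
    }
    // A real implementation returns an EAT-format token; this returns a placeholder.
    Ok(r#"{"attestation_results":{}}"#.to_string())
}

fn main() {
    let evidence = r#"{"tee":"tdx","quote":"...","ehd":{},"aad":{}}"#.to_string();
    match attestation(evidence) {
        Ok(results) => println!("attestation results: {results}"),
        Err(e) => eprintln!("attestation failed: {e}"),
    }
}
```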

As a Service

The Attestation Service can be extended with a separate service application built on top of the library above. An example Cargo.toml entry:

[[bin]]
name = "attestation_service"
path = "app/attestation_service/src/main.rs"

Note: In the POC stage, the service is reachable only at localhost:port.

The POC will use gRPC, so its proto file should be:

syntax = "proto3";

package attestation;

message AttestationRequest {
    bytes evidence = 1;
}
message AttestationResponse {
    bytes results = 1;
}

service AttestationService {
    rpc Attestation(AttestationRequest) returns (AttestationResponse) {};
}

Attestation Module

Its responsibility is to verify the various kinds of received Evidence. It includes the following two internal modules:

  • Verification Drivers: Different HW-TEE-specific modules plus some common modules. The former verify the Evidence's identity; the common modules verify the TCB status.
  • Proxy: Proxy sub-modules compatible with third-party Attestation Services, so those services can be used to attest Evidence directly.

It also defines a trait that needs to be implemented by its internal modules.

trait Attestation {
    fn attestation(&self, evidence: String) -> Result<String, String>;
}
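A minimal sketch of an implementor of this trait, assuming a hypothetical SampleVerifier whose acceptance logic is purely illustrative:

```rust
trait Attestation {
    fn attestation(&self, evidence: String) -> Result<String, String>;
}

// Illustrative implementor; a real driver would verify a hardware quote here.
struct SampleVerifier;

impl Attestation for SampleVerifier {
    fn attestation(&self, evidence: String) -> Result<String, String> {
        // Made-up acceptance rule standing in for real verification.
        if evidence.contains("\"tee\"") {
            Ok(r#"{"attestation_results":{}}"#.to_string())
        } else {
            Err("evidence missing tee field".to_string())
        }
    }
}

fn main() {
    let v = SampleVerifier;
    let result = v.attestation(r#"{"tee":"sgx"}"#.to_string());
    assert!(result.is_ok());
    println!("sample verification ok");
}
```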

Verification Drivers

It includes different HW-TEE specific pluggable verification modules and some common modules to verify the Evidence's identity and TCB status:

  • TDX/SGX/SEV(-ES)/SEV-SNP (pluggable): HW-TEE specific modules.
  • Eventlog-rs (common): Parses the TCG-protocol-compliant eventlog inside the Evidence.
  • Policy Engine (common): Verifies TCB status via a Policy-as-Code mechanism.

TDX/SGX/SEV(-ES)/SEV-SNP

Its features:

  • Each module corresponds to a specific HW-TEE type.
  • Each module can be enabled/disabled by feature, such as #[cfg(feature = "tdx")].
  • It can verify the received Evidence's identity.
  • It can extract the TCB information inside the received Evidence.

Eventlog-rs

To verify the measurements of all programs running inside a TEE-based VM, a TCG-protocol-compliant eventlog is included in the Evidence. eventlog-rs parses that eventlog and extracts the measurements of the following components, which are sent to the Policy Engine for final verification (as an example):

  • Bootloader
  • Kernel Parameters
  • Kernel
  • ...

Note: Only TDX supports the eventlog currently.

Policy Engine

It uses Open Policy Agent (OPA) to implement Policy as Code. OPA relies on the following files to execute verification:

  • OPA Input (JSON): The data to be verified, such as the TCB status and running programs' measurements.
  • OPA Data (JSON): The reference data that is provided by Reference Value Provider.
  • OPA Policy (.rego): The judgment policy.rego file.

Policy.rego

TCB status verification policies differ across HW-TEEs; for example:

  • Policy.rego used to check Bootloader, Kernel Parameters, and Kernel measurements extracted from eventlog:
package policy

import future.keywords.in
default allow = false
allow {
    bootloader_is_granted
    kernel_is_granted
    kernelparameters_is_granted
}

bootloader_is_granted {
    count(data.bootloader.hashes) == 0
}
bootloader_is_granted {
    input.bootloader == data.bootloader.hashes[_]
}
kernel_is_granted {
    count(data.kernel.hashes) == 0
}
kernel_is_granted {
    input.kernel == data.kernel.hashes[_]
}
kernelparameters_is_granted {
    count(data.parameters.hashes) == 0
}
kernelparameters_is_granted {
    input.parameters == data.parameters.hashes[_]
}
  • Policy.rego used to check SGX Evidence's TCB information:
package policy

# By default, deny requests.
default allow = false

allow {
    mrEnclave_is_grant
    mrSigner_is_grant
    input.productId >= data.productId
    input.svn >= data.svn
}

mrEnclave_is_grant {
    count(data.mrEnclave) == 0
}
mrEnclave_is_grant {
    count(data.mrEnclave) > 0
    input.mrEnclave == data.mrEnclave[_]
}

mrSigner_is_grant {
    count(data.mrSigner) == 0
}
mrSigner_is_grant {
    count(data.mrSigner) > 0
    input.mrSigner == data.mrSigner[_]
}
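To illustrate how the three OPA documents fit together, here are hypothetical Input and Data files that would satisfy the SGX policy above (all values are made up):

```
// OPA Input (extracted from the SGX Evidence):
{
    "mrEnclave": "8d3f...",
    "mrSigner": "83d7...",
    "productId": 1,
    "svn": 2
}

// OPA Data (reference values from the Reference Value Provider):
{
    "mrEnclave": ["8d3f..."],
    "mrSigner": ["83d7..."],
    "productId": 1,
    "svn": 1
}
```

With these documents, allow evaluates to true: both measurements match a reference entry, and the product ID and SVN meet the reference minimums.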

Proxy

It includes pluggable proxy sub-modules that are used to support third-party Attestation Services, such as:

  • Microsoft Azure Attestation Service
  • ISecL Attestation Service

Management API Module

Attestation Service also enables a management API to provide the following functionalities:

  • Set Policy Engine's reference data by Reference Value Provider.
  • Set proxy sub-modules configurations.

So this module provides the following interfaces:

fn set_policy(request: &SetPolicyEnginePolicyRequest) -> Result<(), String> {};
fn export_policy(request: &ExportPolicyEnginePolicyRequest) -> Result<PolicyEnginePolicy, String> {};
fn set_reference(request: &SetPolicyEngineReferenceRequest) -> Result<(), String> {};
fn export_reference(request: &ExportPolicyEngineReferenceRequest) -> Result<PolicyEngineReference, String> {};
fn test(request: &TestPolicyEngineRequest) -> Result<TestPolicyEngineResponse, String> {};

As a Service

If the Attestation Service is built as a service, it will provide these functions as gRPC services. Its proto file should be:

syntax = "proto3";

package managementapi;

message SetPolicyEnginePolicyRequest {
    bytes name = 1;
    bytes content = 2;
}
message SetPolicyEnginePolicyResponse {
    bytes status = 1;
}

message SetPolicyEngineReferenceRequest {
    bytes name = 1;
    bytes content = 2;
}
message SetPolicyEngineReferenceResponse {
    bytes status = 1;
}

message ExportPolicyEnginePolicyRequest {
    bytes name = 1;
}
message ExportPolicyEnginePolicyResponse {
    bytes status = 1;
    bytes content = 2;
}

message ExportPolicyEngineReferenceRequest {
    bytes name = 1;
}
message ExportPolicyEngineReferenceResponse {
    bytes status = 1;
    bytes content = 2;
}

message TestPolicyEngineRequest {
    bytes policyname = 1;
    bytes policycontent = 2;
    bool policylocal = 3;
    bytes referencename = 4;
    bytes referencecontent = 5;
    bool referencelocal = 6;
    bytes input = 7;
}
message TestPolicyEngineResponse {
    bytes status = 1;
}

service PolicyEngineService {
    rpc SetPolicy(SetPolicyEnginePolicyRequest) returns (SetPolicyEnginePolicyResponse) {};
    rpc ExportPolicy(ExportPolicyEnginePolicyRequest) returns (ExportPolicyEnginePolicyResponse) {};
    rpc SetReference(SetPolicyEngineReferenceRequest) returns (SetPolicyEngineReferenceResponse) {};
    rpc ExportReference(ExportPolicyEngineReferenceRequest) returns (ExportPolicyEngineReferenceResponse) {};
    rpc Test(TestPolicyEngineRequest) returns (TestPolicyEngineResponse) {};
}

Opens

  • Currently the Attestation Service only supports runtime attestation (TDX/SEV-SNP/SGX); pre-attestation (SEV(-ES)) is supported by simple-kbs.

Reference

Reduce build dependency

The AS currently imports some dependencies, like tonic, that are only used in grpc-as or build.rs but not in the library. Some refactoring is needed to remove unnecessary build dependencies by defining features.

Think of a good way to integrate RVPS into KBS

KBS currently integrates the AS as a module. However, inside the AS, the evaluation logic requests digests from the RVPS.

Per the design, the RVPS provides two basic APIs.

This means the RVPS must monopolize a thread to process input and retrieval requests concurrently and continuously. Thus there are two possible designs:

  • Every time an AttestationService object is created, the embedded RVPS spawns a tokio task to process input, retrieval, and shutdown requests (shutdown is triggered by dropping the AttestationService object). This works like spawning a goroutine in Go. In this mode, the caller of AttestationService must specify an extra port for the inner RVPS to listen on for input requests.
  • The RVPS runs as a separate binary, and the AS connects to it via gRPC or similar. In this mode, when creating an AS object, we need to specify the remote address of the RVPS.

Way 1: Only one binary runs. However, the RVPS and the AS's caller (such as KBS) share the same process in different tasks, so if the RVPS crashes, the AS module and its caller crash with it. It also feels somewhat awkward.
Way 2: Although it adds an extra running binary, we can provide a docker-compose file to help users quickly set up a KBS + RVPS microservice.

I'm not sure which one is better, though I slightly prefer way 2. Alternatively, we could implement both and add a switch in KBS:

  • If the RVPS's remote address is specified, use way 2.
  • If a local listen address is given, use way 1.
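The proposed switch could be sketched like this (the type and function names are illustrative, not the actual KBS configuration):

```rust
// Hypothetical configuration switch between the two RVPS deployment designs.
enum RvpsMode {
    /// Way 2: connect to a standalone RVPS binary over gRPC.
    Remote(String),
    /// Way 1: spawn the RVPS in-process, listening on a local address.
    BuiltIn(String),
}

fn select_rvps_mode(remote_addr: Option<&str>, local_addr: &str) -> RvpsMode {
    match remote_addr {
        Some(addr) => RvpsMode::Remote(addr.to_string()),
        None => RvpsMode::BuiltIn(local_addr.to_string()),
    }
}

fn main() {
    let mode = select_rvps_mode(Some("rvps.example:50003"), "127.0.0.1:50003");
    match mode {
        RvpsMode::Remote(a) => println!("using remote RVPS at {a}"),
        RvpsMode::BuiltIn(a) => println!("spawning built-in RVPS on {a}"),
    }
}
```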

Build error for kbs-types definition

There is a new error

   Compiling attestation-service v0.1.0 (https://github.com/confidential-containers/attestation-service.git#de05cf53)
error[E0609]: no field `k` on type `TeePubKey`
  --> /usr/local/cargo/git/checkouts/attestation-service-fed607e36ff88ac9/de05cf5/src/verifier/sample/mod.rs:31:47
   |
31 |         hasher.update(&attestation.tee_pubkey.k);
   |                                               ^ unknown field
   |
   = note: available fields are: `alg`, `k_mod`, `k_exp`

For more information about this error, try `rustc --explain E0609`.
error: could not compile `attestation-service` due to previous error
error: failed to compile `kbs v0.1.0 (/usr/src/kbs/src/kbs)`, intermediate artifacts can be found at `/usr/src/kbs/target`

This is because a new commit in kbs-types changed the field name (virtee/kbs-types@183dfcb). I suggest pinning the rev of kbs-types to avoid this kind of error.

Support brokering confidential resources which `image-rs` needs.

This generic KBS will replace simple-kbs and verdictd as the default recommended component of the CoCo solution, so it needs to support providing existing secret resources. Currently, in the Confidential Containers architecture, the component requesting resources from KBS through the Attestation Agent is image-rs, so our generic KBS needs to support provisioning the following kinds of resources required by image-rs:

  1. Container image decryption key.
  2. Registry pulling policy file.
  3. Container image signature verification key, including cosign and simple signing.
  4. Registry authentication information.

This work is based on #16.

TDX Attestation Driver.

A TDX attestation verifier driver needs to be added that implements the Verifier trait.

The driver should support the following functions:

  1. Parse the Attestation message to get the TDX quote and eventlog.
  2. Verify the signature of the TDX quote.
  3. Check whether the hash of nonce||pubkey matches the report data in the TDX quote.
  4. Dump the TCB status (MRCONFIG, etc.) and measurements from the TDX quote and eventlog (measurements of the kernel, kernel cmdline, rootfs).
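Function 3 can be sketched as follows; note that DefaultHasher below is only a stand-in for the real digest algorithm used for the TDX report_data (e.g. a SHA-2 family hash), and the helper names are made up:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Placeholder digest standing in for the real hash over nonce||pubkey.
fn digest(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

// Step 3 sketch: check that hash(nonce || pubkey) matches the digest
// carried in the quote's report_data field.
fn report_data_matches(nonce: &[u8], pubkey: &[u8], report_data_digest: u64) -> bool {
    let mut buf = Vec::with_capacity(nonce.len() + pubkey.len());
    buf.extend_from_slice(nonce);
    buf.extend_from_slice(pubkey);
    digest(&buf) == report_data_digest
}

fn main() {
    let nonce = b"cdfunsas23445xd";
    let pubkey = b"xxxxxxxx";
    let mut expected = Vec::new();
    expected.extend_from_slice(nonce);
    expected.extend_from_slice(pubkey);
    let expected = digest(&expected);
    assert!(report_data_matches(nonce, pubkey, expected));
    println!("report_data check passed");
}
```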

Modify Verifier Driver Trait to support acquisition of collateral.

As mentioned in issue #191, when validating the TEE attestation report, some collateral, such as a certificate chain, is required. It is a good idea to obtain the collateral in the AS: if we do so, we can later add a general cache facility for the AS to store it.

But the current definition of trait of verifier driver is as follows:

pub trait Verifier {
    /// Verify the hardware signature and report data in TEE quote.
    /// If the verification is successful, a key-value pairs map of TCB status will be returned,
    /// The policy engine of AS will carry out the verification of TCB status.
    async fn evaluate(&self, evidence: &Evidence) -> Result<TeeEvidenceParsedClaim>;
}

If we want to obtain collateral such as the certificate chain in the AS, the verifier driver trait above cannot meet the requirements. We need to split it into the following form:

pub trait Verifier {
    async fn parse_tee_quote(&self, tee_evidence: String) -> Result<TeeEvidenceParsedClaim>;
    async fn verify(&self, tee_evidence: String, collateral: String, report_data: String) -> Result<()>;
}

In this way, the AS can first call parse_tee_quote to extract the TCB information from the TEE attestation report, then use the TCB status to obtain the collateral from the cache server or the remote end, and then call verify to verify the TEE attestation report.
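The resulting flow might look like this simplified, synchronous sketch, where fetch_collateral is a hypothetical helper standing in for the cache or remote lookup, and the types are reduced to strings:

```rust
// Simplified, synchronous sketch of the proposed two-phase verifier flow.
trait Verifier {
    /// Phase 1: parse the quote and extract TCB claims (no signature check yet).
    fn parse_tee_quote(&self, tee_evidence: &str) -> Result<String, String>;
    /// Phase 2: verify the quote against the fetched collateral.
    fn verify(&self, tee_evidence: &str, collateral: &str, report_data: &str) -> Result<(), String>;
}

// Hypothetical stand-in for a cache or remote endorser lookup.
fn fetch_collateral(_claims: &str) -> Result<String, String> {
    Ok("collateral".to_string())
}

fn attest(v: &dyn Verifier, evidence: &str, report_data: &str) -> Result<String, String> {
    let claims = v.parse_tee_quote(evidence)?;     // 1. extract TCB claims
    let collateral = fetch_collateral(&claims)?;   // 2. obtain collateral
    v.verify(evidence, &collateral, report_data)?; // 3. verify the report
    Ok(claims)
}

struct DummyVerifier;

impl Verifier for DummyVerifier {
    fn parse_tee_quote(&self, tee_evidence: &str) -> Result<String, String> {
        Ok(format!("claims-of:{tee_evidence}"))
    }
    fn verify(&self, _e: &str, collateral: &str, _r: &str) -> Result<(), String> {
        if collateral.is_empty() { Err("no collateral".into()) } else { Ok(()) }
    }
}

fn main() {
    let claims = attest(&DummyVerifier, "quote-bytes", "report-data").unwrap();
    println!("{claims}");
}
```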

@fitzthum @dubek @niteeshkd

Add basic CI compile checking.

Problem

The Attestation Service does not include basic CI compile checks for the latest pushes and pull requests.

Goal

Add basic CI compile checks to this project. The basic checks include:

cargo build
cargo test
cargo fmt --all -- --check
cargo clippy -- -D warnings
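One way to wire these checks into GitHub Actions could look like this (the workflow path, action version, and runner image are illustrative, not an existing workflow in this repo):

```yaml
# Hypothetical minimal workflow, e.g. .github/workflows/ci.yml
name: CI
on: [push, pull_request]
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: cargo build
      - run: cargo test
      - run: cargo fmt --all -- --check
      - run: cargo clippy -- -D warnings
```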

Support for modular KMS components

Key management now has relatively mature industry implementations, so KBS should support different KMSes as backends to store keys and secrets. In this scenario, KBS plays the role of a proxy. We should probably adopt a modular design to support different backends.

Here are some straightforward questions:

  1. How to design the modularization?
  2. How does KBS know that an accessed resource comes from a KMS rather than being stored in the KBS itself? (By the resource URI? How?)

Integrate key generation, image encryption, and key registration into KBS

Background

Generating an encrypted image for CoCo is currently not easy, because we need to do the following steps manually:

  • Set up CoCo-Keyprovider
  • Generate a KEK manually
  • Use skopeo to encrypt an image using the KEK
  • Set up a KBS
  • Register the KEK into KBS

Although we can document this, it is still troublesome for newcomers.

What does this issue propose?

I suggest integrating the CoCo keyprovider into the "KBS cluster" (in the docker-compose) with some changes: include key generation inside the CoCo keyprovider, and let the CoCo keyprovider automatically connect to the KBS to register the key. A user would then only need to:

  • Set up a "KBS cluster" using docker-compose
  • Use skopeo to encrypt an image.

Much easier for a user, right?

What do we need to do?

If we agree on this, here is the work to be done:

cc @sameo @fitzthum @bpradipt @jialez0

Implement HTTPS server to support KBS protocol APIs.

Here we need to implement an HTTPS server that supports the KBS protocol APIs defined in the protocol document.

reference-kbs has already implemented an HTTPS server supporting the KBS protocol APIs. It lives in the virTEE community and supports SEV pre-attestation well. However, this repo still needs its own HTTPS server code so that, on top of it, we can provide an internal implementation designed for CoCo, better adapt our attestation-agent and attestation-service, and perform CoCo runtime secret injection to support the needs of image-rs and kata-agent.

Resume the development on this general KBS project

Essentially, this project is part of general attestation infrastructure design for CC community. See confidential-containers/confidential-containers#119 for details.

However, the progress of this project has not been as expected, and we still don't have a prototype. As a member promoting the establishment of the general attestation infrastructure for the CC community, I hope to find a successor to resume development of this general KBS project and make it available in a future CoCo release together with attestation-service.

As part of the general attestation infrastructure for the CC community, the attestation-service project already has a prototype, and it recently integrated the Reference Values Provider Service (RVPS) module to retrieve trusted reference values from supply-chain/artifact provenance, completing the full functionality of an attestation service according to the RATS architecture. In addition, this attestation service implements a verifier driver framework to support multiple verifier drivers for various TEEs. The last piece of the puzzle is the message protocol between KBS and attestation-service. Obviously, this work requires a KBS to make it happen.

Sergio seems to be interested in this. He proposed enhancements to the KBS message protocol between KBS and KBC. According to #7, Sergio already has a reference KBS at https://github.com/virtee/reference-kbs.

So we can choose to resume KBS development, or alternatively use Sergio's initial implementation if he is willing. Either option would need to meet the following requirements:

  • Host the KBS implementation in this project in the CoCo community.
  • Adopt the defined KBS message protocol.
  • The KBS needs to interact with attestation-service to verify evidence and return attestation results.

"no Go source files" error always appears when trying to build the project

Hello, I have a problem when trying to build the project:

make && make install
   Compiling hyper-timeout v0.4.1
   Compiling in-toto v0.3.0 (https://github.com/Xynnn007/in-toto-rs.git?rev=7f69799#7f69799d)
   Compiling git2 v0.15.0
   Compiling tonic v0.8.3
   Compiling shadow-rs v0.19.0
   Compiling attestation-service v0.1.0 (/home/yaoxin/attestation-service)
error: failed to run custom build command for `attestation-service v0.1.0 (/home/yaoxin/attestation-service)`

Caused by:
  process didn't exit successfully: `/home/yaoxin/attestation-service/target/release/build/attestation-service-415e692c8a604cbc/build-script-build` (exit status: 1)
  --- stdout
  cargo:rerun-if-changed=/home/yaoxin/attestation-service/target/release/build/attestation-service-7acaabdf8a946b93/out
  cargo:rustc-link-search=native=/home/yaoxin/attestation-service/target/release/build/attestation-service-7acaabdf8a946b93/out
  cargo:rustc-link-lib=static=cgo

  --- stderr
  ERROR: build github.com/open-policy-agent/opa/capabilities: cannot load github.com/open-policy-agent/opa/capabilities: no Go source files

warning: build failed, waiting for other jobs to finish...
make: *** [Makefile:22: grpc-as] Error 101

This error always appears. Could you help me resolve it?

SEV-SNP policy definition

As mentioned in #215 and given the current state of the AS server, we have defined policies for SGX and TDX evidence. If the AS client makes a request to the server for any of the defined RPC functions, the server responds with "TEE is not supported". We need to define a policy for SEV-SNP. Opening this issue so we can discuss what this policy should contain.

Steps

  • Build the AS cargo build --release
  • Run the attestation server ./target/release/attestation-server
  • On a different terminal make request to the client ./target/release/attestation-service-ctl --tee sevsnp policy get
Error: status: Aborted, message: "Get policy: TEE is not supported!", details: [], metadata: MetadataMap { headers: {"content-type": "application/grpc", "date": "Tue, 12 Jul 2022 07:25:39 GMT"} }


RVPS | A sample format for provenance message

For now, we support in-toto as one type of provenance message for providing reference values. However, we have not yet set up a producer of in-toto provenance. We need a simple enough provenance format to do the reference value input for the v0.5.0 release.

We will discuss how to generate in-toto provenance in the software build pipeline after the release, to make the story more robust.

[support] Build failure after merging the TDX verifier

With the merge of confidential-containers/attestation-service#49, I can no longer build the AS and KBS.

Here is the error I hit after typing make build

 error: failed to run custom build command for `sgx-dcap-quoteverify-sys v0.1.0 (https://github.com/intel/SGXDataCenterAttestationPrimitives?rev=85cf8bdd#85cf8bdd)`

 Caused by:
  process didn't exit successfully: `/go/src/github.com/confidential-containers/attestation-service/target/release/build/sgx-dcap-quoteverify-sys-f2635bfdea888259/build-script-build` (exit status: 101)
  --- stdout
  cargo:rustc-link-lib=sgx_dcap_quoteverify
  cargo:rerun-if-changed=bindings.h

  --- stderr
  thread 'main' panicked at 'libclang error; possible causes include:
  - Invalid flag syntax
  - Unrecognized flags
  - Invalid flag arguments
  - File I/O errors
  - Host vs. target architecture mismatch
  If you encounter an error missing from this list, please file an issue or a PR!', /root/.cargo/registry/src/github.com-1ecc6299db9ec823/bindgen-0.59.2/src/ir/context.rs:538:15
  stack backtrace:
     0:     0x55bbe2eb41ba - std::backtrace_rs::backtrace::libunwind::trace::h34aec3ef6cd8ad7e

I have installed the dependencies,

apt-get install -y libtdx-attest-dev libsgx-dcap-quote-verify-dev

what should I do? thanks!

Useless information provided to hardware-specific verifier

Thanks to discussion with @jiazhang0 @jialez0 .
When a remote attestation request is being verified, the AS crate calls the hardware-specific verifier function, whose interface looks like:

async fn evaluate(
        &self,
        evidence: &Evidence,
        policy: Option<String>,
        reference_data: Option<String>,
    ) -> Result<AttestationResults>;

This means that the hardware-specific verifier does the following two things:

  • verification of the hardware-specific quote (TEE evidence)
  • verification of the evidence with an OPA policy

In fact, the second task does not belong to a hardware-specific verifier. It also burdens verifier developers with having to think about OPA policies.

A possible way to address this is to refactor the signature of the evaluate function in the verifier trait, like:

async fn evaluate(
        &self,
        evidence: &Evidence,
        reference_data: Option<String>,
    ) -> Result<Claims>;

Here Claims are the strings that the quote carries, and this evaluate only does hardware-specific verification. A separate function will then process the Claims against the user-customized OPA policy.

Another benefit of this refactor is that it becomes easier to integrate the RVPS into the AS.

[RVPS] In-toto Provenance Handler Development

As mentioned in #210, we aim to first add an in-toto provenance handler to the RVPS. However, the Rust version of in-toto is not ready yet. There are two options:

  • Use a rust-version wrapper of golang-version in-toto
  • Use a purely rust version, which will be new and probably not stable

Besides, both options above need to wait for the upstream issue in-toto/in-toto-rs#15 to be resolved, because there are some dependency and compatibility problems to be fixed.

HTTPS integration is unusual

While the format and contents of the Challenge, Attestation and Response payloads look good to me, the integration of those payloads in the HTTPS schema is quite unusual, which would make its implementation more complex and, quite possibly, somewhat unreliable:

  • As @thomas-fossati already pointed out in #4, a GET with a payload is weird, and many web frameworks don't support it.
  • Using the same route /kbs/key/key_id for all payloads forces both the server and the client to first identify the nature of the payload, and then deserialize it. Most web frameworks are designed to support a single payload per route, otherwise requiring manual parsing of the body contents. This means more error-prone work on the shoulders of the devs implementing the protocol.
  • The use of a JSON web token for the purpose of identifying a previous successful authentication request is significantly more complex than the usual cookie+session strategy. I understand this is done to make the KBS stateless, but in the end this brings more complexity to the server implementation than having an in-memory table of valid sessions and their cookies.
  • Explicit versioning on the route (i.e. /kbs/v0/key/key_id) is less error-prone and more reliable than having it as a field on a JSON object.

I think something like this would be a more conventional approach (this example assumes a session/authentication is only valid in the context of a key_id):

  • A GET to /kbs/v0/key/<key_id> without a valid session_id cookie returns HTTP/401 Unauthorized.
  • A client can request a new authentication session by POSTing a Request payload to /kbs/v0/key/<key_id>/session. The server replies to this POST with a Challenge payload, possibly extended with a status and/or error field to cover cases in which the request couldn't be served for some reason (e.g., the TEE type not being supported by the KBS).
    • In case the server can process the request, it will also generate a new session entry, storing the parameters from Request on it, and returning an identifier referring to it to the client, as a cookie.
  • Using the cookie with the session id received from the server, the client can POST an Attestation payload to /kbs/v0/key/<key_id>/attest.
    • If the attestation succeeds, the server updates the corresponding in-memory session entry to mark it as authenticated.
    • The server replies with a payload containing a status and/or error field to inform the result of the operation.
  • If the attestation is successful, the client can issue again a GET to /kbs/v0/key/<key_id> and this time, having a cookie with a valid session_id, it will receive a Response payload.

Is there already software implementing the current KBS attestation protocol or are we still in time to introduce some changes to it?

Update protocol document

Based on the discussions at #6, update the protocol document and formalize the HTTP protocol with an OpenAPI document.

Distribute KBS public key

Confidential Containers doesn't have a good way to pre-provision the public key of the KBS. I think we should add this to the KBS protocol so that the KBS can tell the KBC its public key. Obviously you don't have to trust this value if you have some a priori trusted channel (what?).

HTTPS Support is missing

The current KBS implementation only supports HTTP, but that should be an explicit opt-in selection instead.

HTTPS should be the default setting, and that requires the addition of 2 parameters to the CLI: --private-key and --certificate (or --cert-chain). Those 2 parameters should be mandatory unless the caller selects the --insecure-http-only parameter.

See #26.

CI error when compiling OPA

Hi, there is something wrong with CI when building


error[E0308]: mismatched types
Error:    --> src/policy_engine/opa/mod.rs:117:32
    |
117 |         let res = opa.evaluate(dummy_reference(5), dummy_input(5, 5));
    |                       -------- ^^^^^^^^^^^^^^^^^^ expected struct `HashMap`, found struct `std::string::String`
    |                       |
    |                       arguments to this function are incorrect
    |
    = note: expected struct `HashMap<std::string::String, Vec<std::string::String>>`
               found struct `std::string::String`
note: associated function defined here
   --> src/policy_engine/mod.rs:25:8
    |
25  |     fn evaluate(
    |        ^^^^^^^^

error[E0308]: mismatched types
Error:    --> src/policy_engine/opa/mod.rs:121:32
    |
121 |         let res = opa.evaluate(dummy_reference(5), dummy_input(0, 0));
    |                       -------- ^^^^^^^^^^^^^^^^^^ expected struct `HashMap`, found struct `std::string::String`
    |                       |
    |                       arguments to this function are incorrect
    |
    = note: expected struct `HashMap<std::string::String, Vec<std::string::String>>`
               found struct `std::string::String`
note: associated function defined here
   --> src/policy_engine/mod.rs:25:8
    |
25  |     fn evaluate(
    |        ^^^^^^^^

For more information about this error, try `rustc --explain E0308`.
error: could not compile `attestation-service` due to 2 previous errors
Error: warning: build failed, waiting for other jobs to finish...
Error: The process '/home/runner/.cargo/bin/cargo' failed with exit code 101

Implement the Attestation Server application.

Background

According to General Attestation Service Design Proposal, the Attestation Service can be built and run as a single server application (The Attestation Service can be extended to add a separate service application which also relies on the upper library).

Problem

The current Attestation Service only implements the Crate in path: https://github.com/confidential-containers/attestation-service/tree/main/lib

Goal

Extend the Attestation Service to add an Attestation Server application in the path: https://github.com/confidential-containers/attestation-service/tree/main/server

Failed to compile KBS

Hey, I got the following error when I was trying to compile the project:

yaoxin@master:~/kbs$ make kbs
Makefile:7: warning: overriding recipe for target 'kbs'
Makefile:3: warning: ignoring old recipe for target 'kbs'
cargo build --no-default-features --features native-as
warning: /home/yaoxin/kbs/src/api_server/Cargo.toml: dependency (tempfile) specified without providing a local path, Git repository, or version to use. This will be considered an error in future versions
warning: /home/yaoxin/kbs/src/api_server/Cargo.toml: unused manifest key: dev-dependencies.tempfile.dev-workspace
warning: /home/yaoxin/kbs/Cargo.toml: unused manifest key: workspace.dev-dependencies
warning: skipping duplicate package `app` found at `/home/yaoxin/.cargo/git/checkouts/sgxdatacenterattestationprimitives-d6934a418e6beae0/85cf8bd/SampleCode/RustTDQuoteGenerationSample`
   Compiling attestation-service v0.1.0 (https://github.com/confidential-containers/attestation-service.git?rev=9484f06#9484f06f)
   Compiling actix-web-httpauth v0.8.0
error: failed to run custom build command for `attestation-service v0.1.0 (https://github.com/confidential-containers/attestation-service.git?rev=9484f06#9484f06f)`

Caused by:
  process didn't exit successfully: `/home/yaoxin/kbs/target/debug/build/attestation-service-05826db9cb0747e6/build-script-build` (exit status: 1)
  --- stdout
  cargo:rerun-if-changed=/home/yaoxin/kbs/target/debug/build/attestation-service-f8d89eef70b8cda2/out
  cargo:rustc-link-search=native=/home/yaoxin/kbs/target/debug/build/attestation-service-f8d89eef70b8cda2/out
  cargo:rustc-link-lib=static=cgo

  --- stderr
  ERROR: build github.com/open-policy-agent/opa/capabilities: cannot load github.com/open-policy-agent/opa/capabilities: no Go source files

warning: build failed, waiting for other jobs to finish...
make: *** [Makefile:7: kbs] Error 101

Could someone help me to resolve this error?

Many thanks

Integrate Attestation-Service to verify TEE evidence

The generic KBS needs to integrate the Attestation Service so that the KBS can forward TEE evidence to it and have the evidence verified according to the owner's configured policy. We can start this work after #16 is completed.

Attestation Service can be integrated by KBS in two forms:

  1. As a Rust library crate.
  2. As a gRPC server application.

We should provide both of the above options in the future. The first can be used by users who want the KBS deployment to be as simple as possible, but it may drop some AS functions (such as monitoring the upstream supply chain to update reference values at any time). The second provides more complete functionality, but requires some additional operations during deployment.

Launch parameter for public key/certificate

We need to add a launch parameter for public key/certificate to enable HTTPS. Maybe we need a switch for HTTPS/HTTP. If HTTPS is enabled, a public key or a certificate is required.

Track oci2cw tool ideas on how to build images

Building images requires a separate step that generates the keys in a secure way, and then sends them to the KBS.
The oci2cw tool does this for confidential workloads.

Discussing with @slp earlier, he mentioned that he wanted to do a Rust version. During a later discussion at KVM Forum, he mentioned that there was a possibility this would be integrated into buildah instead.

See also build process overview
Building images

(source)

JWK public key format definition in KBS Protocol

Now we use JWK in a format as defined in https://github.com/confidential-containers/kbs/blob/main/docs/kbs_attestation_protocol.md#public-key

{
    "kty": "$key_type",
    "alg": "$key_algorithm",
    "k": "public_key"
}

However, the JWK defined in RFC 7517 (https://www.rfc-editor.org/rfc/rfc7517) does not have the field k.

Now we use RSA PKCS#1 v1.5 padding with a 2048-bit key as the public-key algorithm. To follow the RFC, it might look like the following:

{
    "kty": "RSA", // https://www.rfc-editor.org/rfc/rfc7518#section-6.1
    "alg": "RS256", // https://www.rfc-editor.org/rfc/rfc7518#section-3.1
    "n": "<base64url-encoded n parameter of the RSA public key>", // https://www.rfc-editor.org/rfc/rfc7518#section-6.3.1.1
    "e": "<base64url-encoded e parameter of the RSA public key>" // https://www.rfc-editor.org/rfc/rfc7518#section-6.3.1.2
}

Although this is not a problem for now, since both the AA and AS logic are controlled by us and CI passes, we should think about normalizing the definition (including the key type selection) after v0.5.0. cc @sameo @jialez0

use standard data formats?

Three potential places where you could use existing standard formats instead of defining new ones:

  • Key material transported in JSON and JWT objects could be JWKs
  • In the response, the output+crypto-annotation combo could use JWS
  • For detailed error reporting, you could use the "problem details" format defined in RFC7807

One advantage is that there is a ton of library code that can be readily used to pull together the service.

Another advantage is that a standard is usually the result of years of collective engineering experience, and tends to have already absorbed a lot of the "hard learned lessons" :-)

Quickstart script for KBS

To make KBS deployment easier, it would be good to provide a docker-compose file or something similar to help a user quickly set up the series of services on the tenant side. I can help with this.

SEV-SNP Attestation Driver

Here is a rough vision for how we can verify the SNP evidence.

  1. Parse evidence->attestation report to get CPU ID and TCB Info. Use this to get the signed cert chain from the KDS. Cache the cert chain for future requests (not sure of the best way to do caching in an AS driver)
  2. Verify signature of attestation report
  3. Compare report data to hash of public key and nonce in the evidence
  4. Extract all policy-related fields from evidence/attestation report (TCB Version, Kernel Hash for direct boot, cpu count, etc)
  5. Verify policy related fields using OPA and reference data
  6. If policy is valid, construct an expected launch measurement using kernel hash, cpu count, etc.
  7. Compare expected launch measurement to launch measurement in attestation report

I think that's it?

Support Token distribution and verification.

This is an enhancement and future work.

At present, we have designed this enhanced function in the KBS protocol: clients can request a Token resource, and the KBS will sign and distribute a Token representing the client's identity after a successful Attestation. If this Token accompanies subsequent resource Requests and is verified by the KBS, there is no need to do Attestation again.

Furthermore, we should extend the KBS to verify tokens. Besides verifying tokens signed by the KBS itself, we can also verify tokens signed by a trusted third-party Attestation Service that the user trusts (for example, the Azure Attestation Service).

It should be noted that we first need to implement a Session-Cookie based KBS. After its basic functions are available, the function described in this issue will be added as an additional option.

RVPS | Client tool

We need a client tool to debug the RVPS; concretely, to register reference values and query reference values.

[RFC] Reference Value Generation

Background

Now different attesters in the cc-kbc (like TDX and SEV-SNP) are being developed or refined, and the related verifiers in the Attestation Service (TDX and SEV-SNP) are also under development.

The attesters collect hardware and software evidence in the guest TEE, and the verifiers verify the hardware evidence and parse the claims. Ideally, the claims should include software evidence. For example, in TDX they will contain measurements reflecting the kernel, kernel parameters, and td-shim/guest firmware. SEV might not have fine-grained per-component evidence, but it still has a digest which we can rebuild from the expected values of the kernel, kernel parameters, and guest firmware.

So for remote attestation, the next key problem is how to provide the reference values. This proposal aims to state the problem and propose a solution. Please feel free to give any suggestions.

Reproducible build? Not now.

The best way to provide reference values is reproducible builds. That way, a user can build their own binaries and calculate the reference values (digests) in a deterministic way. Here, a deterministic way means directly running shaXXXsum on the binary or using specific tools like td-shim-tee-info-hash, etc. However, this proposal suggests another way instead of reproducible builds, leaving reproducible builds as future work. Here are the reasons:

  • Reproducible builds are still hard to implement; our team is currently stuck on the rootfs.
  • It is sometimes neither straightforward nor convenient for users to build binaries themselves.
  • We can provide published artifacts together with related tools to calculate reference values.

We suggest the CoCo community provide a pipeline or process to build binaries, generate reference values, and produce signatures if needed.
In this way, we sacrifice:

  • A tighter trust model (with reproducible builds, users would only need to trust the codebase of the artifacts via code transparency; now they must also trust the CoCo community)

Still, as a zero-to-one step, it would work as a temporary solution until we achieve reproducible builds. And as a neutral community, CoCo is well suited for this role.

Reference Value Publish Process

Here is a sketch of the artifact publish process. Let's take TDX as an example, on which we have done experiments with initial success.

We recommend introducing the following release process in the community:

  1. The CoCo community compiles the customized guest OS image, kernel, and guest firmware (td-shim).
  2. The CoCo community signs the compiled binaries using well-known software supply chain tools/services like sigstore, as many other communities that care about software supply chain security do (like Constellation, Kubernetes, etc.), to provide a public, user-verifiable supply chain metadata record service.
  3. CoCo publishes the binaries on every new release, or on a different release cycle.
  4. The CoCo community calculates the reference values of the artifacts, organizes them into a well-defined JSON format, and then publishes them to a new repo under Confidential-Containers, for example https://github.com/confidential-containers/provenances.
    • To calculate the reference values, we might use a tool that does the deterministic calculation. We should also publish it and give detailed documentation on how to use it. Then users can calculate and compare the results against the published references.
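A hypothetical shape for such a reference value file is sketched below. All field names and the layout are illustrative assumptions, not a settled format; the actual schema would be decided in step 4 above.

```json
{
  "version": "0.5.0",
  "artifacts": [
    {
      "name": "td-shim",
      "reference-values": {
        "tee-info-hash": "<hex digest produced by a tool such as td-shim-tee-info-hash>"
      }
    },
    {
      "name": "kernel",
      "reference-values": {
        "sha384": "<hex digest of the published kernel binary>"
      }
    }
  ]
}
```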

Regarding the process, the most important thing is step 2. This step ensures the binaries are published by the CoCo community; only then does the logic of calculating reference values and using the published reference values make sense.

Reference Value Consume Process

Let's see how the AA-AS-KBS architecture could consume the published reference values. Currently, the RVPS component in the AS parses the different types of provenances. In the VBDA design, we want the RVPS to play the role of a subscriber to the publisher, i.e., the producer of the provenance, which is consistent with the model in which organizations or software vendors release software reference values for CSPs.

To adapt what we've mentioned in the Reference Value Publish Process to the publisher-subscriber model, we should add a small component playing the role of the publisher. This component will periodically poll the repo https://github.com/confidential-containers/provenances and check whether there are newly published reference values. If so, it will download them and publish them to the RVPS.

Next, the RVPS will parse the reference values and wait for the Attestation Service to query them.

Work List

If the community agrees on this proposal, to summarize, these are the things to do (the list might be modified and updated):

  • Determine the concrete list of entities that need to be measured on each architecture, and provide reference value calculation tools and the related measurement logic for each entity. For example, for the kernel in TDX, we will use the Confidential Computing Event Log (CCEL) for measurement and probably a related tool to calculate the reference value.
  • Open a new repo, or decide on a proper place to publish the reference values and binaries (if binaries need to be published together for every release).
  • Decide the format of the reference values.
  • Use a community GitHub account to help sign the binaries using sigstore tools (we will open another issue to describe this).
  • Integrate the calculation tools and signing process into a binary-publishing CI so that everything is automated and can be triggered manually.
