
PSA Certified API Specifications

This is the official place for the latest documents of the PSA Certified API.

This GitHub repository contains:

  • Specification source files
  • Reference copies of the PSA Certified API header files
  • Examples of usage and implementation of the PSA Certified APIs
  • Discussions of updates to the specifications
  • Proposed changes to the specifications

Officially released specification documents can be found on the associated PSA Certified API website.

Specifications

The following specifications are part of the PSA Certified API.

| Specification | Published | Document source | Reference headers | Dashboard |
|---|---|---|---|---|
| Crypto API | 1.2.1 | doc/crypto/ | headers/crypto/1.2/ | Project board |
| Secure Storage API | 1.0.3 | doc/storage/ | headers/storage/1.0/ | Project board |
| Attestation API | 1.0.3 | doc/attestation/ | headers/attestation/1.0/ | Project board |
| Firmware Update API | 1.0.0 | doc/fwu/ | headers/fwu/1.0/ | Project board |
| Status code API | 1.0.3 | doc/status-code/ | headers/status-code/1.0/ | Project board |

Extensions

Extension specifications introduce new functionality that is not yet stable enough for inclusion in the main specification.

| API | Extension | Published | Document source | Reference headers | Dashboard |
|---|---|---|---|---|---|
| Crypto API | PAKE | 1.2 Final | doc/ext-pake/ | headers/crypto/1.2/ | Project board |

Reference header files

Reference header files for each minor version of each API are provided in the headers/ folder.

Test Suite

Test suites that validate implementations of the Crypto, Attestation, and Secure Storage APIs against the specifications are available from github.com/ARM-software/psa-arch-tests.

Compliance badges can be obtained from PSA Certified to showcase compatible products.

Example source code

Source code examples of both usage and implementation of the PSA Certified APIs are provided in the examples/ folder.

Related Projects

Known projects that implement or use the PSA Certified APIs are listed in related-projects.

License

Text and illustrations

Text and illustrations in this project are licensed under Creative Commons Attribution–Share Alike 4.0 International license (CC BY-SA 4.0).

Grant of patent license. Subject to the terms and conditions of this license (both the CC BY-SA 4.0 Public License and this Patent License), each Licensor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Licensed Material, where such license applies only to those patent claims licensable by such Licensor that are necessarily infringed by their contribution(s) alone or by combination of their contribution(s) with the Licensed Material to which such contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Licensed Material or a contribution incorporated within the Licensed Material constitutes direct or contributory patent infringement, then any licenses granted to You under this license for that Licensed Material shall terminate as of the date such litigation is filed.

The Arm trademarks featured here are registered trademarks or trademarks of Arm Limited (or its subsidiaries) in the US and/or elsewhere. All rights reserved. Please visit arm.com/company/policies/trademarks for more information about Arm's trademarks.

About the license

The language in the additional patent license is largely identical to that in section 3 of the Apache License, Version 2.0 (Apache 2.0), with two exceptions:

  1. Changes are made related to the defined terms, to align those defined terms with the terminology in CC BY-SA 4.0 rather than Apache 2.0 (for example, changing "Work" to "Licensed Material").

  2. The scope of the defensive termination clause is changed from "any patent licenses granted to You" to "any licenses granted to You". This change is intended to help maintain a healthy ecosystem by providing additional protection to the community against patent litigation claims.

Source code

Source code samples in this project are licensed under the Apache License, Version 2.0 (the "License"); you may not use such samples except in compliance with the License. You may obtain a copy of the License at apache.org/licenses/LICENSE-2.0.

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

See the License for the specific language governing permissions and limitations under the License.

Feedback

If you have questions or comments on any of the PSA Certified API specifications, or suggestions for enhancements, please raise a new issue.

Please indicate which specification the issue applies to. This can be done by:

  • Providing a link to the section of the specification on this website.
  • Providing the document name, full version, and section or page number in the PDF.

Contributing

Anyone may contribute to the PSA Certified API. Discussion of changes and enhancements happens in this repository's Issues and Pull requests. See CONTRIBUTING for details.


Copyright 2022-2024 Arm Limited and/or its affiliates


Issues

Interruptible/bounded-latency asymmetric operations

Use case

There are some systems that require asymmetric cryptographic operations in execution contexts that cannot tolerate long-running functions. However, some asymmetric operations have long, or even unbounded, execution times.

For example, these use cases can arise in constrained microcontrollers that have real-time response requirements, or in firmware that executes in highly-privileged execution modes.

Proposal

Long-duration asymmetric operations are typically composed of many repeated smaller steps. If the Crypto API exposed a mechanism for the application to perform the operation in a step-wise — 'interruptible' — manner, the application could meet the bounded-latency requirement by repeatedly calling the API to make progress on the operation.

Mbed-TLS now has a prototype API for interruptible hash signature operations (sign-hash and verify-hash), which effectively proposes a general pattern for this type of API.

The API is composed of the following building blocks. For details of the proposal, please see Mbed-TLS/mbedtls#6279.

PSA_OPERATION_INCOMPLETE (macro)

This is a non-error status code to indicate an incomplete interruptible operation. This is already defined in the Status code API.

ops

This is a term used to describe a 'unit of work' that can be carried out within an interruptible operation. The actual 'size' or time duration of one op is implementation- and function-specific, and can also depend on the algorithm inputs (such as the key size).

An application can set a global 'max ops' value that limits the ops performed within any interruptible function. If an interruptible function reaches this threshold before completing, it returns PSA_OPERATION_INCOMPLETE instead of a success or error status.

The current threshold can also be queried. When an interruptible operation completes, the application can also query the total number of ops required: this can allow an application to tune the threshold value it uses.

Interruptible operation object

An interruptible operation requires the management of state relating to the operation. This state is held (or referenced, depending on the implementation) in an implementation-defined object that is allocated by the application. The allocation and lifecycle of these objects follows a similar pattern to that used for multi-part operations in the existing API.

Interruptible operation objects must be initialized before first use, as with the multi-part operation objects.

Interruptible operation sequence

Each interruptible operation will have a set-up function to provide the input parameters. This is typically named psa_xxx_start(), for example, psa_sign_hash_start().

To progress the operation, a matching psa_xxx_complete() function is called repeatedly, providing any required output parameters:

  • If the operation finishes, the output is provided and PSA_SUCCESS is returned.
  • If the operation fails, an appropriate error status is returned.
  • If the operation has not yet completed, PSA_OPERATION_INCOMPLETE is returned, and the caller should invoke the psa_xxx_complete() function again.

On successful completion, the operation object is reset, but can still be queried to determine the number of ops required.

On error, the operation object enters an error state, and must be reset by a call to psa_xxx_abort(). The application can also call psa_xxx_abort() to cancel an in-progress interruptible operation.

Stray whitespace in the PSA status specification

According to the PSA Firmware Framework 1.0.0, the master list that predates PSA status codes as a separate specification, TF-M, and Mbed TLS, the way to define a PSA status code is #define PSA_ERROR_FOO ((psa_status_t)-42), with no spaces inside the definition.

According to the PSA status code specification version 1.0 and the reference header psa/error.h, the way to define most PSA status codes is #define PSA_ERROR_FOO ((psa_status_t) -42), with a space before the minus sign. There are a few exceptions; I think these are the ones that were imported from crypto in a later batch.

According to the C99 specification (n1256 §6.10.3 constraint 1, emphasis mine):

Two replacement lists are identical if and only if the preprocessing tokens in both have the same number, ordering, spelling, and white-space separation, where all white-space separations are considered identical.

According to gcc -Werror or clang -Werror, the white-space separation bit does matter. Found when testing PSA crypto.

Conclusion: please issue an erratum to the status code specification, removing the spaces before the minus signs (and before the 0 for PSA_SUCCESS). (I personally don't mind whether there's a space, but through historical happenstance, the PSA world has settled on no spaces.) And please note in the specification that this matters.

Ascon Algorithm Identifiers

ASCON-128, ASCON-128a, ASCON-HASH and ASCON-HASHA would fit into the PSA Certified API; only the definition of some key types (for the AEAD) and algorithm IDs is needed.

Are you going to define such constants for the PSA Certified API?

Ascon is a family of authenticated encryption and hashing algorithms designed to be lightweight and easy to implement, even with added countermeasures against side-channel attacks. Ascon was selected as the new standard for lightweight cryptography in the NIST Lightweight Cryptography competition (2019–2023). Ascon was also selected as the primary choice for lightweight authenticated encryption in the final portfolio of the CAESAR competition (2014–2019).

(from https://ascon.iaik.tugraz.at/)

Use psa_key_derivation multipart operation for TLS 1.3 key

The psa_key_derivation multipart operation allows only one key derivation operation, which makes it unsuitable for the TLS 1.3 key schedule.
For example, Mbed TLS uses psa_raw_key_agreement and multiple HKDF_EXTRACT/HKDF_EXPAND operations. With those operations, the secrets are temporarily outside of the TrustZone.
I see there is some specific support for TLS 1.2 in PSA. Do you plan to do the same for TLS 1.3?

Release Candidate *2* for Firmware Update API 1.0.0

Updated: Following the RC1 review, RC2 adds clarifications for some of the API concepts, improves consistency, and includes an additional threat in the SRA.

The differences since RC1 can be found in PR #68 (commits following 3dc2fe8), and in all the commits in PR #79.

PDF for review: IHI0093-PSA_Certified_Firmware_Update_API-1.0.0-rc2.pdf
PDF for review: IHI0093-PSA_Certified_Firmware_Update_API-1.0.0-rc1.pdf

This includes all merged and proposed changes since 1.0-beta, including #48, #49, #68, and #71.

The change history in Appendix E provides a summary of the changes, and links to the primary sections that are impacted. For a detailed diff of the changes, see the Files changed tab for the specific Pull Request.

Please provide feedback about anything that is unclear, confusing, missing, or incorrect, so that we can finalize the document for publication.

PSA Firmware API FWU: support on flash nand with ECC

For platforms with NAND flash and ECC, write accesses must be aligned to a minimum size that varies by platform.
Unaligned accesses on such platforms can be supported by the on-device FWU update service, but this adds complexity to the service and requires an additional block in flash to support unaligned access.
Another solution is to add an implementation-defined macro, PSA_FWU_WRITE_SIZE_UNIT:
On platforms without an alignment constraint, the macro is set to #define PSA_FWU_WRITE_SIZE_UNIT 1.
For a platform with a 32-byte alignment constraint, the value is set to #define PSA_FWU_WRITE_SIZE_UNIT 32.
The image_offset and block_size parameters of psa_fwu_write would then be expressed not in bytes but in multiples of PSA_FWU_WRITE_SIZE_UNIT.

NB: In Firmware_Update_API-1.0-bet.0, there is only one implementation-defined macro (PSA_FWU_MAX_WRITE_SIZE).

Incremental/partial initialization of a Crypto implementation

psa_crypto_init() is allowed to, and even encouraged to, initialize the RNG. This allows many implementations to provide a RNG that will never fail. (Implementations that comply to some security standards will still need to fail RNG calls if the entropy source fails, but a CSPRNG that has been seeded once can be good forever under reasonable security requirements.)

This is a problem on systems that don't have an entropy source available, or where the entropy source is not yet available at the time the application wants to call psa_crypto_init(). The typical scenario would be a device or an application that only wants to calculate hashes and verify signatures, for example a bootloader.

An application that only wants to calculate hashes might get away with insisting that the implementation accepts psa_hash_xxx() calls before psa_crypto_init(), although this is strictly non-portable. An application that wants to verify signatures needs the keystore to be available, and it's less reasonable to require this to happen without psa_crypto_init(). Even hash drivers may need some initialization. And client-server implementations may need psa_crypto_init() to establish the communication between the client and the server.

There are use cases for doing a partial initialization of the PSA crypto subsystem: initialize basic functionality, initialize drivers, maybe initialize the key store, but do not initialize the RNG. This behavior is currently implementation-specific, but it makes sense to standardize it.

The following API definition is based on the current PR proposal for Mbed-TLS: Mbed-TLS/mbedtls#6636.

Proposed API

psa_crypto_init (function)

Amend the description to also refer to the fine-grained initialization control provided by psa_crypto_init_subsystem().

psa_crypto_subsystem_t (type)

The designation of a subsystem of the PSA Crypto implementation.

typedef uint32_t psa_crypto_subsystem_t;

Values of this type are masks of PSA_CRYPTO_SUBSYSTEM_xxx constants.

PSA_CRYPTO_SUBSYSTEM_COMMUNICATION (macro)

Crypto subsystem identifier for the communication with the server, if this is a client that communicates with a server where the key store is located.

#define PSA_CRYPTO_SUBSYSTEM_COMMUNICATION /* implementation-defined value */

In a client-server implementation, this subsystem is necessary before any API function other than library initialization, deinitialization and functions accessing local data structures such as key attributes.

In a library implementation, initializing this subsystem does nothing and succeeds.

PSA_CRYPTO_SUBSYSTEM_KEYS (macro)

Crypto subsystem identifier for the key store in memory.

#define PSA_CRYPTO_SUBSYSTEM_KEYS /* implementation-defined value */

Initializing this subsystem allows creating, accessing and destroying volatile keys in the default location, i.e. keys with the lifetime PSA_KEY_LIFETIME_VOLATILE.

Persistent keys also require PSA_CRYPTO_SUBSYSTEM_STORAGE. Keys in other locations also require PSA_CRYPTO_SUBSYSTEM_SECURE_ELEMENTS.

PSA_CRYPTO_SUBSYSTEM_STORAGE (macro)

Crypto subsystem identifier for access to keys in storage.

#define PSA_CRYPTO_SUBSYSTEM_STORAGE /* implementation-defined value */

Initializing this subsystem as well as PSA_CRYPTO_SUBSYSTEM_KEYS allows creating, accessing and destroying persistent keys.

Persistent keys in secure elements also require PSA_CRYPTO_SUBSYSTEM_SECURE_ELEMENTS.

PSA_CRYPTO_SUBSYSTEM_ACCELERATORS (macro)

Crypto subsystem identifier for accelerator drivers.

#define PSA_CRYPTO_SUBSYSTEM_ACCELERATORS /* implementation-defined value */

Initializing this subsystem calls the initialization entry points of all registered accelerator drivers.

Initializing this subsystem allows cryptographic operations that are implemented via an accelerator driver.

PSA_CRYPTO_SUBSYSTEM_SECURE_ELEMENTS (macro)

Crypto subsystem identifier for secure element drivers.

#define PSA_CRYPTO_SUBSYSTEM_SECURE_ELEMENTS /* implementation-defined value */

Initializing this subsystem calls the initialization entry points of all registered secure element drivers.

Initializing this subsystem as well as PSA_CRYPTO_SUBSYSTEM_KEYS allows creating, accessing and destroying keys in a secure element (i.e. keys whose location is not PSA_KEY_LOCATION_LOCAL_STORAGE).

PSA_CRYPTO_SUBSYSTEM_RANDOM (macro)

Crypto subsystem identifier for the random generator.

#define PSA_CRYPTO_SUBSYSTEM_RANDOM /* implementation-defined value */

Initializing this subsystem initializes all registered entropy drivers and accesses the registered entropy sources.

Initializing this subsystem is necessary for psa_generate_random(), psa_generate_key(), as well as some operations using private or secret keys. Only the following operations are guaranteed not to require this subsystem:

  • hash operations;
  • signature verification operations.
Note
Currently, symmetric decryption (authenticated or not) and MAC operations do not require the random generator. This may change in future versions of the library or when the operations are performed by a driver.

PSA_CRYPTO_SUBSYSTEM_BUILTIN_KEYS (macro)

Crypto subsystem identifier for access to built-in keys.

#define PSA_CRYPTO_SUBSYSTEM_BUILTIN_KEYS /* implementation-defined value */

Initializing this subsystem as well as PSA_CRYPTO_SUBSYSTEM_KEYS allows access to built-in keys.

psa_crypto_init_subsystem (function)

Partial library initialization.

psa_status_t psa_crypto_init_subsystem(psa_crypto_subsystem_t subsystem);
Parameters
subsystem The subsystem, or set of subsystems, to initialize. This must be one of the PSA_CRYPTO_SUBSYSTEM_xxx values, or a bitwise-or of them.
Returns: psa_status_t
PSA_SUCCESS
PSA_ERROR_INSUFFICIENT_MEMORY
PSA_ERROR_INSUFFICIENT_STORAGE
PSA_ERROR_COMMUNICATION_FAILURE
PSA_ERROR_HARDWARE_FAILURE
PSA_ERROR_CORRUPTION_DETECTED
PSA_ERROR_INSUFFICIENT_ENTROPY
PSA_ERROR_STORAGE_FAILURE
PSA_ERROR_DATA_INVALID
PSA_ERROR_DATA_CORRUPT

Description

Applications may call this function on the same subsystem more than once. Once a call succeeds, subsequent calls with the same subsystem are guaranteed to succeed.

Initializing a subsystem may initialize other subsystems if the implementation needs them internally. For example, in a typical client-server implementation, PSA_CRYPTO_SUBSYSTEM_COMMUNICATION is required by all other subsystems, and therefore initializing any other subsystem also initializes PSA_CRYPTO_SUBSYSTEM_COMMUNICATION.

Calling psa_crypto_init() is equivalent to calling psa_crypto_init_subsystem() on all the available subsystems.

Note
You can initialize multiple subsystems in the same call by passing a bitwise-or of PSA_CRYPTO_SUBSYSTEM_xxx values. If the initialization of one subsystem fails, it is unspecified whether other requested subsystems are initialized or not.

Provide better description of multi-component updates

The API and programming model provide the capability of atomically updating multiple components, but there is no single section that clearly describes this feature: how to use it, what the limitations are (for example, it is impossible to atomically update components with different installation behavior), and the flexibility available for implementation constraints.

New algorithm: Zigbee's block-cipher based hash (AES-MMO)

Zigbee defines a cryptographic hash based on a block-cipher using the Matyas-Meyer-Oseas (MMO) construction. See Zigbee Specification r21 §B.6.

The MMO construction is general, requiring:

  • a block cipher that has a key length equal to the block size
  • a padding operation on the input message to align with the block size
  • an IV (or salt) of block-size length

Zigbee specifies all of these details for its hash function based on MMO:

  • AES-128 is used as the block cipher
  • The padding operation is similar to that used for MD and SHA hashes to prevent length-extension attacks, and permits messages of less than 2^32 bits in length
  • The IV is set to 0 (all bits zero)

The existing Crypto API for hash algorithms is not parameterized or salted. So supporting this use case as a Crypto API hash algorithm requires a new hash algorithm identifier for the Zigbee-specified hash function.

PAKE SIZE macros need more arguments

This issue has been replicated from a posting to the [email protected] mailing list, originally submitted by Oberon.

The PAKE input and output size macros are defined with the argument list (alg, primitive, step). However, some input/output values depend on the digest size of the selected hash algorithm. This holds for instance for a SPAKE2+ confirmation key or the SRP client/server proofs.

It is therefore mandatory to include either the hash algorithm or the hash size in the argument list of the two macros.

Issues with `PSA_STORAGE_FLAG_WRITE_ONCE`

There are a couple of concerns with the PSA_STORAGE_FLAG_WRITE_ONCE data item flag, which have led me to wonder if this should be deprecated in a future version of the API.

Is there a good use case?

What is the security benefit of using PSA_STORAGE_FLAG_WRITE_ONCE? What is the use case for which this is the answer? Does it provide any benefit over "stops me accidentally erasing data that I must not delete"?

The API itself guarantees that all of your stored data cannot be deleted by anyone other than the application/client that created it. So this flag only stops you from deleting your own data.

Optional, but undiscoverable?

The specification hints that this flag might not be supported, in §2.3 The Protected Storage API:

However, it MUST treat the PSA_STORAGE_FLAG_WRITE_ONCE flag as definitive if it is supported.

but in §2.4 The Internal Trusted Storage API:

However, it must honor the PSA_STORAGE_FLAG_WRITE_ONCE flag.

However, the specification does not indicate exactly how the caller can determine whether this flag is supported. It is not clear if the implementation will respond to psa_its_set() with PSA_ERROR_NOT_SUPPORTED, or by not setting the PSA_STORAGE_FLAG_WRITE_ONCE flag in the item info returned by psa_its_get_info():

in §2.3 again:

When reporting meta data, psa_ps_get_info() should report the actual protection level applied and not the requested level.

in §5.3.3. psa_its_set:

PSA_ERROR_NOT_SUPPORTED: The operation failed because one or more of the flags provided in create_flags is not supported or is not valid.

Hard to test

When an implementation supports this flag, a test case will create undeletable data items. This prevents the test case from running again, or slowly fills up storage, unless the product provides a mechanism to delete the write-once data items. Such a mechanism defeats the purpose of this API flag.

Firmware Update attributes: global or per-component?

In the 1.0-beta API, some aspects of the API behavior are global, for example PSA_FWU_MAX_WRITE_SIZE, and some are per-component, for example PSA_FWU_FLAG_VOLATILE_STAGING.

The current spec implicitly permits an implementation to have different variations of the state model for different components (e.g. some might require a reboot to install, and others might not), although this is not explicitly stated.

Is there a reason for the attributes related to Firmware Update (such as these) to be global instead of per-component, or per-component instead of global? It seems we should have a good reason why some are global and others are not. There are some new attributes that might be in v1.0.0, arising from #5, #8, and #9.

Currently, the per-component attributes are exposed via the component info structure returned by psa_fwu_query(), and are runtime attributes. The global attribute is a build-time constant macro. Macro definitions are clearly better for allocating buffers, and can reduce code footprint, but are less flexible for both implementations and clients.

Missing support for ‘out of band’ setup calculations in PAKE API

This issue has been replicated from a posting to the [email protected] mailing list, originally submitted by Oberon.

The PAKE interface deliberately does not contain functions for setup calculations that are not directly involved in the PAKE protocol. This might be a good idea to keep things simple and easy to use. However, there must be a way to do calculations like the password hash somehow; otherwise the whole PAKE interface turns out to be useless in many situations. In most PAKE protocols, a password hash or password verifier is not just a hash, but needs field or group calculations related to the main PAKE operation. To allow an implementation of all parts of a PAKE key exchange without relying on a second crypto library, it is mandatory to provide these functions somehow in PSA, either as special PAKE functions or in another interface.

This is highly protocol dependent. For SPAKE2+, for instance, the password hash consists of the values w0 and w1 on the client side, and w0 and L on the server side. w0 and w1 are derived using PBKDF2, but in addition they must be reduced modulo the DH/EC group order. L is the group scalar product of w1 and the group base element. The reduction and the scalar multiplication are missing from the PSA API.
Lacking these operations, we could use the raw password as input instead of the derived values, but this would compromise security, at least on the server side, where no raw passwords should be kept.

A common approach is to expect the unreduced PBKDF2 values on the client side, but provide separate functions for w0 and L to get the password hash on the server side. This interface is used, for instance, in the Matter crypto PAL.

This issue is part of the discussion relating to the addition of support for SPAKE2+ in #73.

New algorithm: Zigbee's encryption and authentication block-cipher mode (CCM*)

Zigbee defines a block-cipher mode, CCM*, that is derived from the CCM mode used for authenticated encryption with additional data (AEAD). See Zigbee Specification r21 §B.1.1 and §A.

CCM is already supported by the Crypto API: see PSA_ALG_CCM.

CCM* differs from CCM in that it additionally supports a construction of CCM which only provides encryption, i.e. authentication tag length M = 0, while retaining the CCM construction of the encryption stream from the nonce, configuration parameter L, and CTR mode.

Zigbee constrains the use of CCM* as follows:

  • the block-cipher is AES-128
  • the message-length encoding parameter, L = 2, limiting the plaintext length to < 2^16
  • the tag length M must be 0, 4, 8 or 16

Further, the construction of the 13-byte nonce is specified precisely, with the final byte encoding the value of M that is used (as part of the packet's 'security level'). See §4.5.2.2 in the Zigbee specification.

Zigbee's packet security can also optionally encrypt the payload. This is achieved by presenting the payload as part of the additional data for CCM* when encryption is not indicated by the security level, or as the plaintext for CCM* when encryption is required. See §4.4.1.1 and §4.4.1.2.

Define API for Module-Lattice-Based Digital Signature (ML-DSA aka CRYSTALS-Dilithium)

NIST has now published a draft specification for a Digital Signature algorithm derived from CRYSTALS-Dilithium. The algorithm is designated ML-DSA, and will be published as FIPS 204. The draft (open for review until 22 Nov 2023) can be downloaded from https://csrc.nist.gov/pubs/fips/204/ipd.

NIST is planning to standardize three parameterized variants of ML-DSA: ML-DSA-44, ML-DSA-65, ML-DSA-87, which provide increasing levels of security for increasing computation and size of keys and signature values.

The signature algorithm is recommended to use fresh randomness, to mitigate side-channel attacks; but can also operate deterministically when the implementation does not have access to randomness, by substituting a zero value in place of the random value.

The Crypto API should define an API for using these algorithms.

Ambiguity in ITS requirements

[This observation resulted from a misreading of the spec]

Requirements 1 and 2 in §3.2 Internal Trusted Storage Requirements explicitly state the requirement for confidentiality and tamper-resistance against physical and software attack:

  1. The storage underlying the Internal Trusted Storage Service MUST be protected from read and modification by attackers with physical access to the device.
  2. The storage underlying the Internal Trusted Storage Service MUST be protected from direct read or write access from software partitions outside of the Platform Root of Trust.

The optionality that is expressed by requirements 4 and 5 in §3.2 is specifically about the use of cryptography to provide or enhance the protection required by 1 and 2:

  4. The Internal Trusted Storage Service MAY provide confidentiality using cryptographic ciphers.
  5. The Internal Trusted Storage Service MAY provide integrity protection using cryptographic Message Authentication Codes (MAC) or signatures.

However, when read on their own, these requirements can be ambiguous:

  • They are intended to be read as "cryptographic ciphers can be used to provide confidentiality, if the attacker can access the non-volatile memory, or to enhance the protection of the application data"
  • They can be interpreted as "cryptographic ciphers can be used to provide optional confidentiality"

It would be valuable to reword these two requirements to be clear, even when read without reference to requirements 1 and 2.

APIs for extracting the shared secret from PAKE operations

In the current beta PAKE API, we have psa_pake_get_implicit_key(), which extracts the [unconfirmed] shared secret result into a single KDF operation object. With the introduction of SPAKE2+ (#73), which includes confirmation of the secret in the protocol, and issues with the usability of the current API for some use cases (#86), it seems worth evaluating the design of this part of the API.

I also have concerns about the naming of this API, and by extension, the proposed psa_pake_get_explicit_key() in #73. To explain the current names: the qualifiers 'implicit' and 'explicit' are an indicator of the authenticity of the key:

  • The 'implicit key' is not authenticated. If the other party has used a different password, then the resulting shared secret is not common, and attempted communication using the 'implicit key' will fail - but detection of that failure depends on details of the following exchanges.
  • An 'explicit key' is the result of a protocol that explicitly authenticates/confirms the shared secret from the exchange, before returning a value derived from the shared secret. Thus the protocol will not provide a shared value for use by the application unless it is confirmed that both parties have used the same password.

1. Flexible usage of the shared secret

Issue #86 suggests that we need an API that extracts the shared secret into a new key, similar to how psa_key_derivation_output_key() can construct a key from a KDF. For full flexibility, this API could permit partial use of the PAKE output, enabling the output to be used to construct a pair of keys. For example, a PAKE that outputs 256 bits could be split into two 128-bit AES keys by calling this output function twice.

If we provide such an API, how important is it to continue to provide an 'extract secret into key derivation operation' function as well? This use case can be achieved by creating a volatile derivation key from the PAKE, inputting that key to a key derivation operation, and destroying the key [after the operation]. The case for providing the 'direct injection' API for J-PAKE was that the unconfirmed/implicit shared secret is not suitable for direct use as a pseudo-random key, due to bias in the output value - forcing the application to pass it through a KDF seemed like a good thing.

Would we ever need to permit an application to easily extract the shared secret data (i.e. without constructing an exportable key to do this)?
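As a self-contained illustration of the splitting behavior described above, here is a small model. It is not the PSA library: pake_output_take is a hypothetical stand-in for the proposed partial-extraction function, consuming a 256-bit PAKE output as two 128-bit keys.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical model of the proposed API: the PAKE's 256-bit output is
 * consumed in chunks, so two successive extractions yield two 128-bit
 * AES keys. This illustrates the design idea only. */
typedef struct {
    uint8_t secret[32]; /* 256-bit shared secret produced by the PAKE */
    size_t consumed;    /* bytes already handed out as key material */
} pake_output_model;

/* Extract the next key_len bytes of the shared secret as a new key.
 * Fails once the output would be over-consumed. */
static int pake_output_take(pake_output_model *op, uint8_t *key, size_t key_len)
{
    if (op->consumed + key_len > sizeof(op->secret))
        return -1;
    memcpy(key, op->secret + op->consumed, key_len);
    op->consumed += key_len;
    return 0;
}
```

In this model, a third extraction attempt fails once the full 256 bits have been consumed, which is one possible answer to the over-consumption question.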

2. Separating unconfirmed outputs from confirmed outputs

I think it is unlikely that we would want to expose the unconfirmed shared secret from a PAKE algorithm that includes a confirmation phase in the protocol. For such protocols, the shared secret would only be available after confirmation. Are there any use cases that contradict such an approach?

If a protocol will output either an unconfirmed secret or a confirmed secret, do we need two different functions (or function sets - see (1))? The case for having different names is that it makes the situation clearer in the application: "does the application need to exchange some confirmation values prior to using the key for communication?" However, most application usage of PAKE algorithms follows higher-level system specifications (e.g. Matter), which define how to use the PAKE, the key scheduling, and any confirmation protocol - so the hint provided by the API name has limited effect on mitigating the risk of a developer using an unconfirmed PAKE output.

So we could just have the singular psa_pake_get_shared_key() (or something similar), which will return whatever the protocol outputs. Developers need to check the algorithm/protocol documentation to determine if they need to do further confirmation.

If we want to retain separate APIs, so it is easier to see in the code what is being returned, I would like to propose that we use something like psa_pake_get_unconfirmed_key() and psa_pake_get_confirmed_key(), instead of the 'implicit' and 'explicit' qualifiers in the current API and proposal.

Use the PSA attestation token format in the Attestation API

The attestation token format is currently being standardized as the PSA Attestation token in the draft datatracker.ietf.org/doc/draft-tschofenig-rats-psa-token [PSATOKEN] specification.

This has evolved slightly since the v1.0 specification, and now uses allocated claim ids, instead of claim ids from the private use range. The token format described in the v1.0 API should be deprecated in favor of the emerging standard, and the Attestation API needs to be updated with a new version to reference the new format.

There are some open issues and options with how this progresses:

  • An implementation that produces a token compliant with [PSATOKEN] is not compliant with the v1.0 Attestation API, so this would probably require an updated specification to be v2.0. But the API is unchanged, so perhaps v1.1 is appropriate?
  • Should the legacy v1.0 format be retained in the new version of the specification, perhaps as an appendix, or just removed entirely? It will still be documented in the v1.0 specification.
  • Should the new version of the specification provide the same level of detail about the token format as the current specification, provide no details and just refer to the IETF document, or some in-between level of description?
  • How should the Attestation API update and IETF document timeline work:
    • Should the new version wait until the IETF document is finalised?
    • Should a beta version of a new Attestation API be published, until the IETF document is finalised?
    • Should a final version of a new Attestation API be published, noting that the IETF document is not yet final?

Clarify expected behavior when a component has transient staging

The initial implementation of the 1.0 API in TF-M v1.7 will not support persistent staging. Images being prepared for update (i.e. components in WRITING or CANDIDATE state) are not preserved when the system restarts. Support for this is planned in a later version of TF-M.

This issue was identified as part of the TF-M implementation of the 1.0-beta API. The implementation patch and review is: https://review.trustedfirmware.org/c/TF-M/trusted-firmware-m/+/17427.

Flexibility in the persistent/transient nature of the WRITING and CANDIDATE states is supported by the v1.0 API, but the specification provides no details on the expected impact on the state model, and implementation behavior.

Proposal

Extend Appendix C Variation in system design parameters to cover this use case.

Review other specification text to ensure that it is consistent with this use case.

PSA key audit API: never-exposable property for keys

Add an API to indicate if a key can be guaranteed to have never been exposed outside of its security boundary. This is required to enable PSA key attestation.

The current Crypto API does have PSA_KEY_USAGE_EXPORT as part of the key usage policy. However, a key having this policy clear does not guarantee that the key value has never been exposed. For example:

  • A key can be imported with psa_import_key() and the new key does not permit export.
  • A key that has the PSA_KEY_USAGE_EXPORT policy is copied to a new key that does not.

A new API is required to identify if the key has never been exposed outside of the security domain in which it was created.

Mbed-TLS/mbedtls#6377 is a pull-request for Mbed-TLS that provides a proposed API (and implementation) for this feature.

The current definition of the API in the PR is:

PSA_KEY_AUDIT_FLAG_NEVER_EXPORTED (macro)

Audit flag indicating that the key material was never and will never be exposed in plaintext form outside the security boundary of its location.

#define PSA_KEY_AUDIT_FLAG_NEVER_EXPORTED ((psa_key_audit_flags_t) 0x00000001u)

This flag should be set if all of the following conditions are met:

  • The key material was generated randomly with psa_generate_key().
  • The key has never had the flag PSA_KEY_USAGE_EXPORT.
  • If the key can be exposed outside its security boundary in wrapped form, the implementation guarantees that the wrapping key itself cannot be exposed.
  • If the key was created by copying another key, these properties also apply to the original key.

This flag must not be set in any of the following cases:

  • The key was created by import.
  • The key, or a copy of it, was exportable in the past (unless the implementation can guarantee that it was never exported).
  • The key, or a copy of it, is currently exportable.
  • The key was created by derivation, unless an implementation-specific policy on the secret from which the key was derived prevents the same key material from being derived again with an exportable policy.
  • The key is located in a secure element, and it or a copy of it may have been present outside that secure element, even if it could not escape the security boundary of the Crypto API implementation.
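The conditions above amount to a predicate over a key's recorded provenance. A self-contained sketch follows; the key_provenance record and compute_audit_flags helper are illustrative simplifications, not part of the PR.

```c
#include <stdint.h>

typedef uint32_t psa_key_audit_flags_t;
#define PSA_KEY_AUDIT_FLAG_NEVER_EXPORTED ((psa_key_audit_flags_t) 0x00000001u)

/* Illustrative record of a key's history; a real implementation would
 * track this internally per key, across copies. */
typedef struct {
    int generated_randomly;  /* created by psa_generate_key() */
    int ever_exportable;     /* key or any copy ever had PSA_KEY_USAGE_EXPORT */
    int wrapping_key_sealed; /* any wrapping key is itself non-exposable */
    int source_key_clean;    /* if copied: the original also met these rules */
} key_provenance;

/* The flag is only reported when every condition holds; imported,
 * derived, or possibly-exposed keys get no flag. */
static psa_key_audit_flags_t compute_audit_flags(const key_provenance *p)
{
    if (p->generated_randomly && !p->ever_exportable &&
        p->wrapping_key_sealed && p->source_key_clean)
        return PSA_KEY_AUDIT_FLAG_NEVER_EXPORTED;
    return 0;
}
```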

psa_key_audit_flags_t (type)

Key audit information in the form of a flag mask.

typedef uint32_t psa_key_audit_flags_t;

A value of this type is a mask (bitwise-or) of PSA_KEY_AUDIT_FLAG_xxx values.

A flag is set in the audit flag mask only if the implementation can guarantee that the corresponding security property was always true. If this is not possible, the implementation must leave the flag unset. Implementations should document which audit flags they support and any applicable limitations.

psa_get_key_audit_flags (function)

Retrieve the audit information flags for a key.

psa_status_t psa_get_key_audit_flags(psa_key_id_t key,
                                     psa_key_audit_flags_t *audit_flags);
Parameters
key The key to query.
audit_flags On success, the key's audit information flags.
Returns: psa_status_t
PSA_SUCCESS Success. audit_flags contains the key's audit information flags.
PSA_ERROR_INVALID_HANDLE key does not exist.
PSA_ERROR_INSUFFICIENT_MEMORY
PSA_ERROR_COMMUNICATION_FAILURE
PSA_ERROR_CORRUPTION_DETECTED
PSA_ERROR_STORAGE_FAILURE
PSA_ERROR_DATA_CORRUPT
PSA_ERROR_DATA_INVALID
PSA_ERROR_BAD_STATE The library has not been previously initialized by psa_crypto_init().

Officially state that enum-like types treat 0 as unspecified/invalid

A minor design decision in the PSA Crypto API is that in enum-like types, the value 0 is reserved to mean “unspecified or invalid”, unless there is a good reason not to. (An example of a good reason not to is psa_status_t, where 0 means success.) This is explicitly specified for some types (e.g. psa_key_type_t, psa_algorithm_t) but not for others (e.g. psa_ecc_family_t, psa_dh_family_t).

We should specify this for all the enum-like types, and add a note somewhere in the API conventions section.

For psa_key_derivation_step_t, the specification does not define numerical values (because these values do not typically end up in persistent storage). However I think the specification should still recommend to leave 0 as undefined. (Making this compulsory could break existing compliant implementations.)

(Prompted by Mbed-TLS/mbedtls#7533 (comment))
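To illustrate the convention, here is a minimal sketch. The PSA_ECC_FAMILY_NONE name is hypothetical; PSA_ECC_FAMILY_SECP_R1 uses its published encoding of 0x12.

```c
#include <stdint.h>

/* 0 is reserved as 'unspecified/invalid'; real family values are nonzero. */
typedef uint8_t psa_ecc_family_t;
#define PSA_ECC_FAMILY_NONE    ((psa_ecc_family_t) 0x00) /* hypothetical name for the reserved value */
#define PSA_ECC_FAMILY_SECP_R1 ((psa_ecc_family_t) 0x12) /* encoding from the Crypto API spec */

/* With the convention in place, a zero-initialized attribute structure
 * is automatically in the 'unspecified' state. */
static int ecc_family_is_specified(psa_ecc_family_t family)
{
    return family != PSA_ECC_FAMILY_NONE;
}
```

This is one reason the convention is useful: static and zeroed storage starts out unambiguously invalid.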

Examples missing

Hi,
thanks for the publication of psa-api.
Unfortunately, the mentioned example section is still empty.

Signer ID required in IAT by PSA-SM?

@athoelke the documentation in this repo states that the signer ID of a claimed software component in the IAT does not need to be present in the token, though adhering to the PSA-SM would require that field being present. Since the PSA-SM doesn't have that many implementation specs, that phrase was surprising to us. Could you elaborate on where in the PSA-SM this requirement is presented?

https://github.com/ARM-software/psa-api/blame/5f2148fa55c61dc64abbe63eba16057fdd2338fe/doc/attestation/overview/report.rst#L140-L147

Add a Security Risk Assessment to the Secure Storage API specification

When initially written, the Secure Storage API considered that an attacker with physical access was within the scope of the security design.

Although this is a good assumption to make when designing an API that enables an implementation, and a user, of the API to mitigate that type of threat in a product, it is not necessarily a requirement on every implementation of the API. In products whose security requirements do not include an attacker with physical access (for example, a product for which PSA Certified Level 2 provides sufficient security), the security goals of this API can be met with different mitigations.

This API is designed to enable 'scalable security' across different products, but with a common API. It would be helpful for the specification to separate the promises that the API makes to the application (e.g. which attributes of the data assets are protected from which actors), from the possible implementation approaches that can mitigate the threats against those assets.

The implementation mitigations - such as the use of cryptography, tamper resistance, or physical isolation - should be selected based on the adversarial model and attacker capabilities that are specified for the implementation design.

A Security Risk Assessment as an appendix could provide a structured approach to identifying aspects of the API design, implementation design, or application usage that are required to mitigate threats that are relevant to specific attacker capabilities. This would then identify the API security requirements and the implementation objectives (replacing the current Requirements chapter), and also identify where there is threat remediation that relies on application usage of the API.

Key agreement interface may present the shared key material outside of the secured space.

psa_raw_key_agreement and psa_key_derivation_key_agreement use a pointer parameter to store the result of the key exchange. In applications where a Diffie-Hellman key exchange is used, the result of the DH (regardless of whether it is raw or derived) can be used for another cryptographic service, such as a block cipher or MAC. In my opinion, it would be more consistent to use a psa_key_id_t * key parameter to return the result of the DH operation.


Add API to support device lifecycle change

In order to enforce the Secure Element state, a device can support different lifecycle states. A device supporting such a mechanism requires a user request to switch its lifecycle state when it is ready to restrict Secure Element features/functionalities.

The PSA Attestation API defines the notion of "Security Lifecycle" in the initial attestation token report, but there is no PSA API for switching the "token's Security Lifecycle".

  • One use case could be device provisioning before switching into the functional state:
    A device in the PSA_LIFECYCLE_PSA_ROT_PROVISIONING lifecycle state allows assets to be provisioned, whether or not those assets are usable while in that state. Assets to provision could be injected in a wrapped format that only the Secure Element can unwrap. Assets could be certificates, keys, passwords, binaries, …
    Then, when all assets are correctly provisioned, the device can be switched into the PSA_LIFECYCLE_SECURED lifecycle state.
    In the PSA_LIFECYCLE_SECURED lifecycle state, the provisioned assets become genuine and usable in the Secure Element. Because the device is in a protected lifecycle state, the assets are held in the protected Secure Element and cannot be replaced.

Clarify the requirements for the hash parameter in calls to `psa_sign_hash()` and `psa_verify_hash()`

The specification of psa_sign_hash() and psa_verify_hash() does not require an implementation to check that the input hash is valid for the signature algorithm.

In part, this is because some protocols separate the 'hash' step from the signing step, and the input to the signing step is not always the output of a cryptographic hash operation.

We propose to tighten the specification of these functions, to recommend that implementations make more effort to validate the input for algorithms where this is meaningful:

  • Ensure that the format of the hash parameter is defined for all algorithms that can be used with these functions
  • Add 'hash is not a valid input value for the signature algorithm alg' to the list of conditions that can result in a PSA_ERROR_INVALID_ARGUMENT return code for these functions
  • Explicitly require that hash_length must be equal to the output length of the associated hash function, for signature algorithms where the input to the signature is a hash.
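A minimal sketch of the third check, assuming a hash-then-sign algorithm; the lookup function, enum names, and error convention are illustrative, not the Crypto API's.

```c
#include <stddef.h>

/* Illustrative hash-algorithm identifiers (not PSA encodings). */
enum { HASH_SHA_256, HASH_SHA_384, HASH_SHA_512 };

/* Digest lengths of the well-known SHA-2 variants. */
static size_t hash_output_len(int hash_alg)
{
    switch (hash_alg) {
    case HASH_SHA_256: return 32;
    case HASH_SHA_384: return 48;
    case HASH_SHA_512: return 64;
    default:           return 0; /* unknown hash */
    }
}

/* For hash-then-sign algorithms, reject a hash parameter whose length
 * does not match the digest length of the associated hash function.
 * A real implementation would map this to PSA_ERROR_INVALID_ARGUMENT. */
static int check_sign_hash_input(int hash_alg, size_t hash_length)
{
    size_t expected = hash_output_len(hash_alg);
    return (expected != 0 && hash_length == expected) ? 0 : -1;
}
```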

Add APIs to support wrapped keys

Many secure elements and crypto accelerators require the use of wrapped keys and will not accept importing clear-text keys to their key store. The existing psa_import_key function could be augmented to support wrapped keys, but that puts the burden of identifying wrapping and associated parameters on the underlying implementation. A new function should be proposed to support wrapped key imports.
As there is no standard for key wrapping data formats and associated algorithms, this should be made generic enough to be adaptable to any such key stores.
The same need exists for exporting wrapped keys.

Define API for Stateless Hash-Based Digital Signature (SLH-DSA aka SPHINCS+)

NIST has now published a draft specification for a Digital Signature algorithm based on SPHINCS+. The algorithm is designated SLH-DSA, and will be published as FIPS 205. The draft (open for review until 22 Nov 2023) can be downloaded from https://csrc.nist.gov/pubs/fips/205/ipd.

NIST is planning to standardize 12 parameterized variants of SLH-DSA, based on three independent parameters:

  • Two hash function families: SHA2 or SHAKE
  • Three security strengths: 128, 192, and 256 bits
  • Two modes: [relatively] small signature ('s'), and [relatively] fast signature ('f')

The qualified names are SLH-DSA-<hash-family>-<strength><mode>, for example SLH-DSA-SHAKE-128s. Table 1 in the draft describes the details of the variants.
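The naming scheme can be sketched mechanically; the helper below is illustrative, not a proposed API.

```c
#include <stdio.h>

/* Build the qualified variant name SLH-DSA-<hash-family>-<strength><mode>.
 * Two families x three strengths x two modes gives the 12 variants. */
static void slh_dsa_name(char *buf, size_t n,
                         const char *family, int strength, char mode)
{
    snprintf(buf, n, "SLH-DSA-%s-%d%c", family, strength, mode);
}
```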

The signature algorithm is recommended to use fresh randomness, to mitigate side-channel attacks; but can also operate deterministically when the implementation does not have access to randomness, by substituting a pre-generated value from the private key in place of the random value.

As SLH-DSA makes two passes of the message for the signature algorithm, FIPS 205 also explicitly describes a pre-hash variation of SLH-DSA where the hash of the message is signed/verified, instead of the message itself.

The Crypto API should define an API for using this algorithm.

It is impossible to derive multiple keys from the common secret in the PAKE API

This issue has been replicated from a posting to the [email protected] mailing list, originally submitted by Oberon.

The suggested PAKE interface passes the resulting raw common secret directly to a key derivation operation. This is good for avoiding direct usage of the secret, but it makes it impossible to derive more than one key from the secret, because the key derivation operation can only be used once. Deriving multiple keys is mandatory for some protocols, for instance to get a session key for the rest of the pairing phase and, in addition, a long-term key to establish further sessions.

For example, HomeKit derives several temporary keys as well as the long term secret key from the SRP common secret. A single key derivation operation cannot be used because the separate keys are derived with separate salt and info values.

It would be better to return the raw secret as a key which can only be used as input to a key derivation operation. This would be easy to use and allows for multiple derived keys.

The same problem exists for the key agreement interface, which is also directly coupled to a key derivation operation. Again, there exist protocols where multiple keys are needed, for instance one for outgoing and one for incoming messages. In the case of key agreement, the psa_raw_key_agreement() function can often be used as a workaround, but returning a derivation-only key instead of coupling to a key derivation operation would be a more general and cleaner solution. [This related issue has also been raised as #85]
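The multiple-key requirement can be illustrated with a toy, non-cryptographic stand-in for a KDF: one shared secret with different 'info' labels yields distinct keys, which a single-use derivation coupling cannot express.

```c
#include <stdint.h>
#include <string.h>

/* Toy mixing function standing in for a real KDF such as HKDF.
 * NOT cryptographically secure; it only demonstrates that distinct
 * 'info' labels yield distinct keys from one shared secret. */
static void toy_kdf(const uint8_t *secret, size_t secret_len,
                    const char *info, uint8_t *out, size_t out_len)
{
    size_t info_len = strlen(info);
    for (size_t i = 0; i < out_len; i++) {
        uint8_t b = (uint8_t)(secret[i % secret_len] ^
                              (uint8_t)info[i % info_len] ^ (uint8_t)i);
        out[i] = (uint8_t)((b << 3) | (b >> 5)); /* rotate for diffusion */
    }
}
```

With a derivation-only key object, an application could run this kind of derivation as many times as its protocol needs, with different salt/info each time.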

New algorithm: Ad-hoc KDF for EC J-PAKE in TLS 1.2

Add API elements for the algorithm and supporting macros for the KDF used with EC J-PAKE in the TLS 1.2.

This has already been included in the development branch of Mbed-TLS (see Mbed-TLS/mbedtls#6115), following review with the Crypto API authors.

Todo:

  • Add and document PSA_ALG_TLS12_ECJPAKE_TO_PMS and PSA_TLS12_ECJPAKE_TO_PMS_DATA_SIZE API elements to the Crypto API
  • Add encoding for PSA_ALG_TLS12_ECJPAKE_TO_PMS to Appendix B
  • [optionally] Provide a code snippet to demonstrate its usage

The relevant Mbed-TLS changes are as follows:

/* The TLS 1.2 ECJPAKE-to-PMS KDF. It takes the shared secret K (an EC point
 * in case of EC J-PAKE) and calculates SHA256(K.X) that the rest of TLS 1.2
 * will use to derive the session secret, as defined by step 2 of
 * https://datatracker.ietf.org/doc/html/draft-cragie-tls-ecjpake-01#section-8.7.
 * Uses PSA_ALG_SHA_256.
 * This function takes a single input:
 * #PSA_KEY_DERIVATION_INPUT_SECRET is the shared secret K from EC J-PAKE.
 * The only supported curve is secp256r1 (the 256-bit curve in
 * #PSA_ECC_FAMILY_SECP_R1), so the input must be exactly 65 bytes.
 * The output has to be read as a single chunk of 32 bytes, defined as
 * PSA_TLS12_ECJPAKE_TO_PMS_DATA_SIZE.
 */
#define PSA_ALG_TLS12_ECJPAKE_TO_PMS            ((psa_algorithm_t)0x08000609)

/* The size of a serialized K.X coordinate to be used in
 * psa_tls12_ecjpake_to_pms_input. This function only accepts the P-256
 * curve. */
#define PSA_TLS12_ECJPAKE_TO_PMS_DATA_SIZE 32
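A self-contained sketch of the input handling described in the comments above: the 65-byte input is an uncompressed secp256r1 point 0x04 || X || Y, and the KDF computes SHA-256 over the X coordinate. The SHA-256 step itself is omitted here, and the helper is illustrative.

```c
#include <stdint.h>
#include <string.h>

#define ECJPAKE_POINT_SIZE 65 /* 0x04 prefix + 32-byte X + 32-byte Y */
#define ECJPAKE_COORD_SIZE 32

/* Extract K.X, the value that SHA-256 is computed over in step 2 of
 * draft-cragie-tls-ecjpake. Rejects anything that is not an
 * uncompressed point. */
static int extract_kx(const uint8_t point[ECJPAKE_POINT_SIZE],
                      uint8_t x[ECJPAKE_COORD_SIZE])
{
    if (point[0] != 0x04)
        return -1; /* not an uncompressed point encoding */
    memcpy(x, point + 1, ECJPAKE_COORD_SIZE);
    return 0;
}
```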

Importing a key without knowing its exact type

The Crypto API currently only supports importing a key where the caller specifies the key type. The required format for the key is typically just the key value itself.

There are numerous applications where a key is provided to the application, embedded in data that also provides key type and usage information. Providing a standard API to decode data from common key formats into a key would benefit application developers. Both by removing the effort to implement, or integrate, code that does this; and reducing the risk of incorrect (vulnerable) implementations of this code.

Key formats that are worth considering for such an API include those defined in:

  • X.509
  • COSE (CBOR Object Signing and Encryption): see RFC 8152 §13

Are there any others?

Possible ambiguity in documentation of `xxx_setup()` functions

The draft PAKE extension, in the documentation of psa_pake_setup(), states:

If an error occurs at any step after a call to psa_pake_setup(), the operation will need to be reset by a call to psa_pake_abort().

This seems to leave some ambiguity as to whether the user needs to call psa_pake_abort() when psa_pake_setup() returns an error, as there are two reasonable interpretations of that sentence:

  1. The call to psa_pake_setup() is not a "step after a call to psa_pake_setup()", so is not covered by that statement, so the user might not be required to call psa_pake_abort() when psa_pake_setup() fails.
  2. If psa_pake_setup() returns an error, that's an event that happens after psa_pake_setup() was called, so that statement applies to it and the user is required to call psa_pake_abort().

Actually, all xxx_setup() functions seem to follow this pattern.

Note: @gilles-peskine-arm points out that the general conventions say that if xxx_setup() fails, it leaves the operation in an undefined state, so clearly if you want to re-use that operation object you need to call xxx_abort() on it first, to bring it back to a defined state - but what if you don't want to re-use it? Do you still need to call xxx_abort() on it (at the risk, otherwise, of leaking resources or leaving your client-server setup in an inconsistent state)?
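The convention under discussion can be modeled as a tiny operation state machine (names and error values are illustrative), showing why calling abort after a failed setup is the safe pattern before reuse.

```c
/* Illustrative operation lifecycle: a failed setup leaves the object in
 * an undefined/error state; only abort returns it to a defined state.
 * In this model, reuse from the error state is refused. */
typedef enum { OP_INIT, OP_ACTIVE, OP_ERROR } op_state;

typedef struct { op_state state; } operation_model;

static int op_setup(operation_model *op, int will_fail)
{
    if (op->state != OP_INIT)
        return -2; /* bad state: object not in a defined, inactive state */
    if (will_fail) {
        op->state = OP_ERROR; /* undefined from the caller's viewpoint */
        return -1;
    }
    op->state = OP_ACTIVE;
    return 0;
}

static void op_abort(operation_model *op)
{
    op->state = OP_INIT; /* always restores a defined state */
}
```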

EdDSA signature algorithm, with a non-trivial context

The Crypto API has had support for the EdDSA signature algorithm since v1.1.0. Current support includes the HashEdDSA variants of this algorithm, and the PureEdDSA variant with a default (empty) context.

PureEdDSA is also defined for use with a non-trivial context parameter. See the definition of Ed25519ctx and Ed448 in RFC 8032 §5.1 and §5.2.

These forms of EdDSA cannot be implemented with the current Crypto API (see the note against PSA_ALG_PURE_EDDSA). Additional API functions would be required so that a context parameter can be provided to the signature and verification operations.

Mismatch between Mbed TLS documentation and J-PAKE specification (RFC8236) in mbedTLS 3.3

The specification of PSA_ALG_JPAKE (in crypto_extra.h as well as in psa_crypto_api_pake_ext.pdf) includes mandatory calls to psa_pake_set_user() and psa_pake_set_peer() to set the local and peer user ids. The implementation in mbedTLS 3.3, however, disallows calls to these functions (PSA_ERROR_NOT_SUPPORTED) and uses the default ids "client" and "server" instead. These ids are often used but are not mandatory according to the J-PAKE specification in RFC8236.

Is this a limitation of the current Mbed TLS implementation (and tests) or will the specification be changed accordingly?

Define API for Module-Lattice-based Key-Encapsulation Mechanism (ML-KEM aka Crystals-KYBER)

NIST has now published a draft specification for a Key Encapsulation algorithm based on Crystals-KYBER. The algorithm is designated ML-KEM, and will be published as FIPS 203. The draft (open for review until 22 Nov 2023) can be downloaded from https://csrc.nist.gov/pubs/fips/203/ipd.

NIST is planning to standardize three parameterized variants of ML-KEM: ML-KEM-512, ML-KEM-768, ML-KEM-1024, which provide increasing levels of security for increasing computation and size of keys and encapsulated key values.

The Crypto API should define an API for using this algorithm.

The limitation of a single active installation process is 'hidden' in the API reference

In the description of psa_fwu_install (function), it states that only one set of components can be in the process of being installed at any one time, and that only when the current set of components has been installed or rejected (no longer in STAGED, TRIAL, or REJECTED state) can a new set be installed.

This behavior is essential for the correct and secure operation of the update process, and should also be clearly described in the §4 Programming model chapter.
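A sketch of the constraint as a guard; the state names follow the specification, while the helper function and error convention are illustrative.

```c
/* Component-set states relevant to the rule (names per the FWU spec;
 * the enum encoding is illustrative). */
typedef enum { FWU_READY, FWU_WRITING, FWU_CANDIDATE,
               FWU_STAGED, FWU_TRIAL, FWU_REJECTED, FWU_UPDATED } fwu_state;

/* Installing a new set must be refused while the previously installed
 * set is still STAGED, TRIAL, or REJECTED. */
static int new_install_allowed(fwu_state previous_set)
{
    return !(previous_set == FWU_STAGED ||
             previous_set == FWU_TRIAL ||
             previous_set == FWU_REJECTED);
}
```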

Relax the definition of 'persistent staging'

The investigation for the solution (PR #49) for issue #5 identified some challenges in the definition of 'persistent staging' for the API. This was partially exposed by #48, where the effect of 'volatile staging' on the state model is explicitly documented.

As currently defined, the state model for 'persistent staging' — components that do not set the PSA_FWU_VOLATILE_STAGING flag — presents challenges for implementations to manage the persistent state model in flash memory. The investigation also identified that most OTA solutions today do not expect downloaded firmware images (in particular, partially downloaded ones) to be retained over a reboot.

It seems that the default position for the FWU API, where a partially written image (state WRITING) is preserved on reboot, is both difficult to implement efficiently, and significantly beyond current practice.

Additionally:

  • Resuming the download of a partially prepared image (in WRITING state) following a reboot requires synchronization between the update server, update client, and the firmware store, to determine the current status. The assumption (in the current spec) is that this will require an implementation-defined API to achieve.
  • Retaining a fully prepared image (in CANDIDATE state) over a reboot, has substantially less complexity, and might be important for some use cases. For example, if the 'download' and 'apply update' commands from the update server to the update client are separate, and could be separated by a significant time delay.

Therefore, there is value in having the API permit an implementation to support persistence of CANDIDATE images, but not of WRITING images.

Proposal

Relax the definition of 'persistent staging', so that this only additionally requires CANDIDATE state to be retained after reboot (compared to volatile staging), and allow it to be implementation-defined whether WRITING, FAILED, and UPDATE states are retained/visible following a reboot.
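Under this proposal, a reboot transition could look like the following model, where discarding a WRITING image is one permitted implementation choice rather than a mandate.

```c
/* Relaxed persistent staging: only CANDIDATE is guaranteed to survive
 * a reboot; a partially written image may be discarded, reverting the
 * component to READY. State names are illustrative abbreviations of
 * the spec's states. */
typedef enum { STG_READY, STG_WRITING, STG_CANDIDATE } staging_state;

static staging_state state_after_reboot(staging_state s)
{
    switch (s) {
    case STG_CANDIDATE: return STG_CANDIDATE; /* must be retained */
    case STG_WRITING:   return STG_READY;     /* may be discarded (this model's choice) */
    default:            return s;
    }
}
```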

We could also define additional component flags to indicate whether the WRITING, FAILED, and UPDATE states are persistent. Alternatively, we could defer standardization of such flags, and of any API that might support resuming a download, until we have a concrete implementation of that use case.

We could rename the current PSA_FWU_VOLATILE_STAGING flag, to better fit the finer-grained options that are proposed here.

Update to the PAKE API - should this remain a Beta?

There are a number of issues related to the PAKE API at the moment. The addition of support for SPAKE2+ (see #73), and issues identified during implementation of the current Beta API (see #86, #87, #88 and #89).

The resolution of these issues and enhancements should be combined into an update to the PAKE API. I see a couple of ways to release those changes:

  1. As another Beta of the PAKE Extension, on the assumption that we are uncertain that we have identified all of the API details for supporting this class of algorithm. In which case, we would aim to publish something like a "Crypto API 1.2 PAKE Extension Beta-1" document, as we are working on a 1.2 update to the main specification.
  2. Alternatively, this could be upgraded to Final, on the assumption that we have identified the major details of the API via multiple implementations supporting multiple PAKE algorithms; and on the expectation that application usage of the API will increase due to the use of SPAKE2+ in Matter and EC J-PAKE in TLS 1.2, making API stability important. In this case, we would aim to integrate the PAKE API into the Crypto API 1.2 (or perhaps 1.3) release, as we would no longer require a separate document.

Is there a strong argument for choosing one of these approaches over the other?

Crypto: add encodings for ChaCha variants

Add algorithm encodings for XChaCha20, XChaCha20-Poly1305, and a list of Salsa variants to be defined. These algorithms aren't currently implemented in Mbed TLS, which is why they weren't included in the original list, but it makes sense to have an encoding for them. They have been requested in Mbed TLS (XSalsa, XChaCha).

Note that we don't want to support every single algorithm that someone somewhere has thought of - only algorithms that either correspond to current best practice and have some traction (used by at least one popular protocol or piece of software), or that are no longer best practice but are still widely used.

SM2 cryptographic algorithms

The Crypto API already has definitions for the SM3 hash algorithm and a SM4 block cipher key type.

CSTC also defines the SM2 public-key algorithms for digital signature, key exchange, and asymmetric encryption.

The digital signature (DSA) and key exchange (KE) algorithms for SM2 require additional context information. Currently, the Crypto API does not have a mechanism to provide this context data. This underlying issue is shared with some forms of the EdDSA signature algorithm (see #18).

SM2 requires an identity-based value in the signature generation/verification and key exchange, referred to as ZA in the specifications. ZA is an SM3 hash of the identity of A (bit length ENTLA and identity string IDA), the curve domain parameters (a, b, xG and yG, for which there is a single set of recommended values), and A's public key (xA and yA).

In both DSA and KE, the domain parameters are fixed by the key/family, and the public key values are available to the Crypto API implementation for both parties. However, the identity string is not - RFC 8998 describes two values that must be used as the identity string for its uses of SM2. As a result, the Crypto API needs to enable an identity string to be provided as part of the signature, verification and key exchange operations.
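To make the data dependency concrete, here is a sketch that assembles the ZA preimage ENTLA || IDA || a || b || xG || yG || xA || yA. The SM3 hashing step is omitted, sizes assume the 256-bit curve, and the helper itself is illustrative.

```c
#include <stdint.h>
#include <string.h>

/* Assemble the ZA preimage ENTLA || IDA || a || b || xG || yG || xA || yA.
 * ENTLA is the bit length of IDA as a 16-bit big-endian value; each
 * domain parameter and public-key coordinate is 32 bytes for the
 * 256-bit curve. Returns the preimage length, or 0 if the output
 * buffer is too small or the identity is too long. */
static size_t sm2_za_preimage(const uint8_t *id, size_t id_len,
                              uint8_t params[6][32], /* a, b, xG, yG, xA, yA */
                              uint8_t *out, size_t out_size)
{
    size_t need = 2 + id_len + 6 * 32;
    if (out_size < need || id_len > 0x1FFF) /* bit count must fit 16 bits */
        return 0;
    uint16_t entl = (uint16_t)(id_len * 8);
    out[0] = (uint8_t)(entl >> 8);
    out[1] = (uint8_t)(entl & 0xFF);
    memcpy(out + 2, id, id_len);
    for (int i = 0; i < 6; i++)
        memcpy(out + 2 + id_len + (size_t)i * 32, params[i], 32);
    return need;
}
```

The point of the sketch: everything except the identity string is already available to the implementation, which is why the API only needs a way to supply IDA.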

API options

  • We could encapsulate the ID with the key-pair and public-key objects, although we would need a mechanism to specify this when creating a key. This is logically reasonable, as this is the 'identity' associated with the key.
  • We could extend the signature and key agreement APIs (e.g. provide additional variants) to take this additional context data. (This might also be useful for supporting EdDSA with non-trivial context).
  • We could implement a multi-part signature API, and provide the signing key's identity information through a dedicated function on that operation.
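To make the second option concrete, here is a hedged sketch of a signature function variant that takes the identity string as extra context. The function name, parameter layout, and stub body are assumptions for illustration; they are not part of the published Crypto API.

```c
/* Hypothetical sketch of API option 2: a psa_sign_message()-style function
   extended with an identity string (the SM2 ID_A value) that the
   implementation would fold into the Z_A hash before signing.
   All names and the stub body are assumptions, not published API. */
#include <stddef.h>
#include <stdint.h>

typedef int32_t psa_status_t;
typedef uint32_t psa_key_id_t;
typedef uint32_t psa_algorithm_t;
#define PSA_ERROR_NOT_SUPPORTED ((psa_status_t)-134)

psa_status_t psa_sign_message_with_id(psa_key_id_t key,
                                      psa_algorithm_t alg,
                                      const uint8_t *id, size_t id_length,
                                      const uint8_t *input, size_t input_length,
                                      uint8_t *signature, size_t signature_size,
                                      size_t *signature_length)
{
    /* Stub for illustration only: a real implementation would compute Z_A
       from the identity, domain parameters, and public key, then sign. */
    (void)key; (void)alg; (void)id; (void)id_length;
    (void)input; (void)input_length; (void)signature; (void)signature_size;
    *signature_length = 0;
    return PSA_ERROR_NOT_SUPPORTED;
}
```

The same extra `id`/`id_length` pair could be added to the corresponding verification and key-agreement functions.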

References:

Public key cryptographic algorithm SM2 based on elliptic curves
Part 2: Digital signature algorithm
Part 3: Key exchange protocol
Part 4: Public key encryption algorithm
Part 5: Parameter definition

These are now paid-for documents from CSTC. The algorithms are also documented in an IETF draft: https://datatracker.ietf.org/doc/draft-shen-sm2-ecdsa/02/

Provide example definitions for some of the implementation-defined macros

The Crypto API spec provides definitions for most constant-value macros, such as algorithm identifier or key type values. The spec also provides example definitions for some of the function-like macros, typically also for macros that construct or query algorithm identifier or key type values. They are provided as examples because an implementation that implements a subset of the key types or algorithms may be able to simplify the macro definitions; or an implementation that provides additional key types or algorithms might need to extend the definitions.

Many of the macros, particularly the buffer sizing macros, are tagged as implementation-defined values. This is because these might identify limits that are specific to an implementation, or can depend on the implementation strategy.

A review of these macros, however, identifies that if we make a single assumption, most of the macros related to input and output buffer sizes are only dependent on the algorithm and key parameters supported by the implementation. The 'single assumption' is that the implementation does the minimal input and output buffering that is required to support the algorithm.

It might be beneficial for the specification to provide example implementations for all such macros, based on a full implementation of all algorithms and key types, and making that single assumption. Implementers can use, simplify, modify, or ignore these example definitions as appropriate.
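As a flavor of what such an example definition could look like, here is a sketch for a hash-length query macro, assuming an implementation that only supports SHA-256 and SHA-512. The macro name and the algorithm identifier values are taken from the Crypto API specification; the simplified ternary body is the kind of definition an implementer of a reduced algorithm set might substitute.

```c
/* Example (non-normative) definition of PSA_HASH_LENGTH for an
   implementation supporting only SHA-256 and SHA-512. The identifier
   values below match the Crypto API specification. */
#include <stdint.h>

typedef uint32_t psa_algorithm_t;

#define PSA_ALG_SHA_256 ((psa_algorithm_t) 0x02000009)
#define PSA_ALG_SHA_512 ((psa_algorithm_t) 0x0200000B)

/* Evaluates to the output length, in bytes, of the given hash algorithm,
   or 0 if the algorithm is not supported by this implementation. */
#define PSA_HASH_LENGTH(alg)              \
    ((alg) == PSA_ALG_SHA_256 ? 32u :     \
     (alg) == PSA_ALG_SHA_512 ? 64u : 0u)
```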

Would this be a worthwhile exercise?

Support for SPAKE2+ in the Crypto PAKE API

There is a request to support SPAKE2+ in the Crypto API. This will require additions and changes to the PAKE extension API (currently beta), as SPAKE2+ is quite different in operation from EC J-PAKE.

An updated proposal document is available in #73 (the first draft was #65). A key comment in the proposal:

SPAKE2+ Version

SPAKE2+, an Augmented PAKE, draft 02 (10 December 2020) is the version considered for the proposal.

Link: https://tools.ietf.org/pdf/draft-bar-cfrg-spake2plus-02.pdf

Remarks

  • SPAKE2+, an Augmented PAKE, draft 08 (5 May 2022) is the latest draft version. Link: https://datatracker.ietf.org/doc/pdf/draft-bar-cfrg-spake2plus-08
  • Shared secret key generation is not compatible between drafts 02 and 08.
  • As most SPAKE2+ implementations (for example, the Matter Specification, Version 1.0) are based on draft 02, that version is being considered for better interoperability.

Asynchronous APIs

To effectively use hardware accelerators with deep pipelines, it is necessary to support asynchronous operation. Asynchronous operation is also needed to use slow secure elements without blocking for several hundred milliseconds.
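As a purely hypothetical illustration of what an asynchronous shape could look like (none of these names exist in the published Crypto API), one option is submit-then-poll: the call returns as soon as the request is queued, and a handle reports completion. This stub completes the work inline so the shape can be seen.

```c
/* Hypothetical sketch only: psa_hash_compute_async, psa_async_handle_t,
   and PSA_OPERATION_PENDING are assumptions, not published API. */
#include <stddef.h>
#include <stdint.h>

typedef int32_t psa_status_t;
#define PSA_SUCCESS           ((psa_status_t)0)
#define PSA_OPERATION_PENDING ((psa_status_t)1)  /* assumed value */

typedef struct {
    volatile int done;    /* set by the driver when the request completes */
    psa_status_t status;  /* final status of the request */
} psa_async_handle_t;

/* Queue a hash computation. A real deep-pipeline accelerator or slow
   secure-element driver would return immediately and mark the handle done
   from an interrupt or worker thread; this stub completes inline. */
psa_status_t psa_hash_compute_async(const uint8_t *input, size_t input_length,
                                    psa_async_handle_t *handle)
{
    (void)input; (void)input_length;
    handle->done = 1;
    handle->status = PSA_SUCCESS;
    return PSA_OPERATION_PENDING;
}
```

A callback-based variant is the other common design; either way, the key requirement is that the submitting call does not block for the duration of the operation.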

Add API to support key attestation

It should be possible to request full information about a stored key: not just its attributes, but also its provenance. Was it imported in clear text? Wrapped? Generated inside the key store? If it was generated on-board, what kind of hardware/software was used to generate it?
That information must be signed by another key that is assumed to be present in the same key store. Trusting that signing key allows a requester to also trust the attested key.

A key attestation function would take as input the ID of the key to be attested and the ID of the key-attestation signing key, and return a signed token in a clearly defined format, such as an EAT token with a fixed list of mandatory claims and additional vendor extensions.
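The description above could be sketched as the following prototype. The function name, parameter order, and stub body are hypothetical; only the general shape (attested key in, signing key in, token out) follows the issue text.

```c
/* Hypothetical sketch (not part of any published PSA API): a key
   attestation function along the lines described above. */
#include <stddef.h>
#include <stdint.h>

typedef int32_t psa_status_t;
typedef uint32_t psa_key_id_t;
#define PSA_ERROR_NOT_SUPPORTED ((psa_status_t)-134)

/* Returns a signed attestation token (e.g. an EAT) describing `key`,
   signed with `signing_key` from the same key store. */
psa_status_t psa_key_attest(psa_key_id_t key,
                            psa_key_id_t signing_key,
                            uint8_t *token, size_t token_size,
                            size_t *token_length)
{
    /* Stub: a real implementation would gather the key's attributes and
       provenance, encode them as claims, and sign the resulting token. */
    (void)key; (void)signing_key; (void)token; (void)token_size;
    *token_length = 0;
    return PSA_ERROR_NOT_SUPPORTED;
}
```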

Support for bootloaders that do not report the outcome of an installation

Context

The initial implementation of the 1.0 API in TF-M v1.7, which uses MCUboot, is not able to report the result of an installation that occurs during reboot when the outcome is not a TRIAL state.

This issue was identified as part of the TF-M implementation of the 1.0-beta API. The implementation patch and review is: https://review.trustedfirmware.org/c/TF-M/trusted-firmware-m/+/17427.

The success or failure of the operation has to be deduced from the resulting version of the components after reboot:

  • If the new images are active, then the installation was successful, and the old images have been discarded
  • If the old images are active, then the installation failed, and the new images have been discarded

In both cases, this corresponds to READY state, and not to the UPDATED or FAILED states that are expected by the state model in the 1.0-beta API. See the state model variation for this use case in Appendix C.1.

When TRIAL state is used, then a successful installation will result in TRIAL state, but a failure will result in READY state, not FAILED, as expected in the state model (see Figure 7).

Analysis

Options for the API:

  1. Maintain the state model. This requires the Update service and bootloader to provide the missing information:

    • Enhance the bootloader to maintain extra states in the image metadata to model these states and report failure reasons
    • Store additional persistent metadata in the Update service to enable it to determine when an installation was attempted over a reboot, and what the outcome was
  2. Relax the state model, and permit an implementation to optionally include the clean transition in an installation that occurs over reboot/restart.

    • This requires that on systems which skip the UPDATED/FAILED state in such transitions, the Update client (or Update server) evaluate the component version on device startup to determine if an attempted update has succeeded or failed.

TF-M would prefer option (2): option (1) incurs additional design and code complexity on their implementation to support additional persistent states, and they feel the burden on the Update client created by option (2) is acceptable.

Option (2) reduces the ability for the Update client at startup to determine its next action, as the API does not provide a report of installation success or failure. However, apart from not having an explicit reason for failure, the Update client (perhaps working with the Update server) is still able to determine the appropriate next action after the event.
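The startup check that option (2) places on the Update client can be sketched as follows. The `psa_fwu_image_version_t` layout matches the Firmware Update API; the helper names and the assumption that the client recorded the staged version before reboot are illustrative.

```c
/* Hedged sketch: deducing installation success after reboot by comparing
   the active version with the version that was staged before reboot. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Version layout as defined by the Firmware Update API. */
typedef struct {
    uint8_t  major;
    uint8_t  minor;
    uint16_t patch;
    uint32_t build;
} psa_fwu_image_version_t;

static bool version_equal(psa_fwu_image_version_t a, psa_fwu_image_version_t b)
{
    return memcmp(&a, &b, sizeof a) == 0;
}

/* Called at startup with the version the client staged before the reboot.
   In READY state the outcome is implicit: new image active => success;
   old image active => the bootloader rejected and discarded the update. */
bool update_succeeded(psa_fwu_image_version_t active,
                      psa_fwu_image_version_t staged)
{
    return version_equal(active, staged);
}
```

In a real client, `active` would come from `psa_fwu_query()` on the component, and `staged` from persistent storage written before `psa_fwu_install()`.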

This type of implementation can still meet the expectation that expensive erase operations do not occur in any API call except psa_fwu_clean().

Proposal

Relax the state model behavior for transitions over reboot, and permit a STAGED component to transition directly to READY on both success and failure of installation, as appropriate for the particular variant of the state model.

The PAKE interface does not work easily with the Mbed TLS driver design

This issue has been replicated from a posting to the [email protected] mailing list, originally submitted by Oberon.

The suggested interface cannot be implemented in an opaque driver.

Opaque drivers are selected based on the key attributes provided. For a multi-part operation the driver is selected by the first function called. For PAKE this is psa_pake_setup(). However, no key is passed to this function. The only key involved is passed to the psa_pake_set_password_key() function which is called later and cannot be used for driver selection because the driver cannot be changed during a multi-part operation.

Unfortunately, the problem is not easy to solve. A change in the interface of the psa_pake_setup() function would work fine in most cases:

psa_status_t psa_pake_setup(
    psa_pake_operation_t *operation,
    const psa_pake_cipher_suite_t *cipher_suite,
    psa_key_id_t password,
    const uint8_t *user_id, size_t user_id_len,
    const uint8_t *peer_id, size_t peer_id_len,
    psa_pake_role_t role);

role, user_id, and peer_id are included because they are often needed to interpret the password value.

However, for some protocols the password hash cannot be calculated before some data is exchanged. For example, in a variant of SRP-6 the client first sends its public key to the server; the server then responds with the password salt and its own public key. The client therefore needs to calculate its public key before it receives the salt needed to calculate the password hash.

This issue is worth considering as part of the addition of support for SPAKE2+ to the PAKE API. See #73

Use Explicit Context in the Crypto API

The legacy Mbed TLS crypto API uses explicit contexts; this removes the global variable and makes thread-safety easy to guarantee.

We use mbedtls 2.x in the Family of Apps at Meta Platforms. We chose it primarily because of its small binary size. Here are our use cases.

  • It is used by different libraries for different purposes.
    • As a TLS library for QUIC/HTTPS client
    • As a TLS library for MQTT client
    • As a crypto library for encryption and decryption
  • It is linked as a single dynamic library for an App.
    • There is only one copy of mbedtls library in an App, but it may be instantiated as many different instances.
    • Each mbedtls instance is maintained by a different team for a different workload, so we would like the instances to be independent of each other.
  • It is used in a multithreaded environment.
    • We achieve thread-safety by running each mbedtls instance in its own thread.

When we started integrating mbedtls PSA crypto, one issue was the use of global_data in PSA crypto.
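The explicit-context pattern being requested can be illustrated with a toy instance type (all names here are hypothetical, not Crypto API types): every piece of state lives in a caller-owned context, so independent instances, or instances on different threads, never touch shared data.

```c
/* Toy sketch of the explicit-context pattern: per-instance state instead
   of a library-wide global. crypto_context_t and its functions are
   hypothetical illustrations, not proposed API names. */
#include <stdint.h>

typedef struct {
    int initialized;
    uint32_t next_key_id;  /* per-instance key-slot bookkeeping */
} crypto_context_t;

void crypto_init(crypto_context_t *ctx)
{
    ctx->initialized = 1;
    ctx->next_key_id = 1;
}

/* Each context hands out its own key IDs; two contexts never interfere. */
uint32_t crypto_allocate_key_id(crypto_context_t *ctx)
{
    return ctx->next_key_id++;
}
```

With a design like this, each team's instance carries its own state, so no synchronization between instances is needed.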

cc @hannestschofenig, @ronald-cron-arm, @daverodgman.
