RustCrypto: Utilities

This repository contains various utility crates used in the RustCrypto project.

Crates

blobby (MSRV 1.39): Decoder of the simple de-duplicated binary blob storage format
block-buffer (MSRV 1.41): Fixed-size buffer for block processing of data
block-padding (MSRV 1.56): Padding and unpadding of messages divided into blocks
cmov (MSRV 1.60): Conditional move intrinsics
collectable (MSRV 1.41): Fallible, no_std-friendly collection traits
cpufeatures (MSRV 1.40): Lightweight and efficient alternative to the is_x86_feature_detected! macro
dbl (MSRV 1.41): Double operation in Galois Field (GF)
hex-literal (MSRV 1.57): Procedural macro for converting hexadecimal strings to byte arrays at compile time
inout (MSRV 1.56): Custom reference types for code generic over in-place and buffer-to-buffer modes of operation
opaque-debug (MSRV 1.41): Macro for opaque Debug trait implementations
wycheproof2blb: Utility for converting Wycheproof test vectors to the blobby format
zeroize (MSRV 1.60): Securely zero memory while avoiding compiler optimizations

License

All crates are licensed under either of

  • Apache License, Version 2.0
  • MIT License

at your option.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

Issues

Replace blobby with Veriform?

I've been working on a serialization format similar to Protocol Buffers which has a semi-mature implementation at this point and fully supports no_std and heapless environments:

https://github.com/iqlusioninc/veriform

It also has custom derive support, making it easy to declaratively describe the structure of messages:

https://github.com/iqlusioninc/armistice/blob/develop/schema/src/provision.rs#L12

I definitely plan on building out more tooling for it, e.g. an easy way to dump the contents of a binary message as JSON and vice versa.

Should we consider switching to it over blobby?

Fix MSRV tests for blobby and cpuid-bool

The new lock format was introduced in Rust 1.41, so older versions can not read it. Should we re-generate lock files for MSRV tests or simply test only for Rust 1.41 (thus effectively making it MSRV for those crates, even though they should work on earlier versions)?

block-buffer yanks

I could not find the reason why it was yanked. Could you provide some information, please?

Thanks!

pkcs8: password protected private key support

Hello,
I learned that PKCS#8 is a standard for storing and transferring private key information. The wrapped key can either be clear or encrypted, I can't find any information in crate PKCS8 about how to set a password protected PKCS8 wrapped key, does the crate PKCS8 has any module, function or trait to set a password to protect the PKCS8 wrapped key? Thanks!
image

proc-macro-hack 0.5?

It would be great if you would update proc-macro-hack dependency to latest 0.5 versions.

x509/pkcs8: clarifying relation between pkcs8::AlgorithmIdentifier struct and x509::AlgorithmIdentifier trait

There's a couple inconsistencies here I'd be happy to resolve but would like a maintainer to tell me if these are acceptable solutions or if there's something I don't know about dev direction I should take into account:

  • pkcs8::AlgorithmIdentifier is a struct that represents an X509 AlgorithmIdentifier, it contains an ObjectIdentifier and an optional params field
  • x509::AlgorithmIdentifier is a trait that defines associated methods that retrieve an ObjectIdentifier (defined as an associated type that implements AsRef<[u64]>) and the params
  • pkcs8::AlgorithmIdentifier does NOT implement x509::AlgorithmIdentifier
  • pkcs8::AlgorithmIdentifier DOES allow for retrieval of a slice of integers that represent the object identifier, however these are defined as u32 while the trait specifies a slice of u64
  • when using x509::write::{certificate, tbs_certificate}, the parameters which require an AlgorithmIdentifier take a generic x509::AlgorithmIdentifier and therefore the primitives defined in pkcs8 cannot be used

My questions are:

  1. Is the disconnect between pkcs8 and x509 intentional or something we should aim to resolve?
  2. Would a PR implementing x509::AlgorithmIdentifier on pkcs8::AlgorithmIdentifier be welcome?
  3. Should an ObjectIdentifier be a slice of u64 or u32?

Thank you for your time!

Padding requires too much overhead

The first encryption example on the README page is somewhat counter-intuitive: the 12-byte message `b"Hello world!"` is encrypted to 16 bytes, but the buffer must be 24 bytes (it is actually 32 bytes in the example).

It turns out that none of the bytes in buffer[16..] are ever read or used, even for intermediate computations. Instead, the block_padding::Padding trait has a default pad method that requires buf.len() - pos >= block_size. This can result in a PadError despite there being enough space for the padding.
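To illustrate the point, here is a minimal sketch of PKCS#7-style padding (a hypothetical helper, not block-padding's actual API): it only ever needs the buffer to reach the next block boundary, so a message at pos with buf.len() >= pos rounded up to the block size can always be padded.

```rust
// Hypothetical sketch, not the block-padding API: pad `buf[..pos]` with
// PKCS#7 padding. Only `block_size - (pos % block_size)` extra bytes are
// needed, never a full spare block past `pos`.
fn pkcs7_pad(buf: &mut [u8], pos: usize, block_size: usize) -> Result<&[u8], ()> {
    let pad_len = block_size - (pos % block_size);
    let padded_len = pos + pad_len;
    if buf.len() < padded_len {
        return Err(()); // genuinely not enough space
    }
    // PKCS#7: each padding byte holds the padding length
    for b in &mut buf[pos..padded_len] {
        *b = pad_len as u8;
    }
    Ok(&buf[..padded_len])
}
```

With this formulation, the 12-byte message above pads into a 16-byte buffer without error.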

pkcs8: decryption/encryption support for EncryptedPrivateKeyInfo

#262 added an initial pkcs8::EncryptedPrivateKeyInfo type with basic parsing/serialization support. However, it doesn't actually support decrypting/encrypting PrivateKeyInfo yet.

Ideally we should only support algorithms which are known to be secure. The most commonly supported ones are based on 56-bit DES, however those provide no effective security as 56-bit DES has far too small a keyspace to be secure against brute force attacks. However, there is support for modern algorithms like AES and old-but-still-secureish algorithms like 3DES in newer revisions of PKCS#5:

  • PKCS#5 v1.5 supports PBE-SHA1-3DES.
  • PKCS#5 v2 adds support for AES encryption with iterated PRFs such as hmacWithSHA256 (a.k.a. PBES2)

It would probably make sense to wait for the cipher crate v0.3 release before attempting to implement decryption/encryption support in pkcs8.

Document undefined behaviour in the HMAC

By Ilary in the rust-crypto IRC channel:

There is a subtle corner case in HMAC: if the hash output is longer than the hash input block, HMAC becomes undefined (at least in the RFC version).

build failed

λ rustc --version
rustc 1.38.0-nightly (311376d30 2019-07-18)
error: proc-macro derive panicked
  --> /Users/xavier/.cargo/registry/src/github.com-1ecc6299db9ec823/hex-literal-0.1.4/src/lib.rs:38:1
   |
38 | / proc_macro_expr_decl! {
39 | |     /// Macro for converting hex string to byte array at compile time
40 | |     hex! => hex_impl
41 | | }
   | |_^
   |
   = help: message: assertion failed: `(left == right)`
             left: `Some("#[allow(unused,")`,
            right: `Some("#[allow(unused")`
   = note: this warning originates in a macro outside of the current crate (in Nightly builds, run with -Z external-macro-backtrace for more info)

der: separate `sequence` and `message`

For some context, see this PR and the added TODOs: #285

Right now the concepts of SEQUENCE and "message" are somewhat conflated, and perhaps confusing, as the latter isn't specifically an ASN.1 concept.

It should be possible to work with ASN.1 SEQUENCEs directly without involving the "message" abstraction, whose goal is to provide a mostly declarative, higher-order abstraction for encoding logic.

Specific recommendations:

  • der::Encoder::sequence should be renamed to der::Encoder::message, and a new der::Encoder::sequence function factored out of the current implementation which is closer to der::Decoder::sequence, spawning a nested der::Encoder and yielding it to a provided callback.
  • der::sequence::encoded_len should probably be renamed to der::message::encoded_len, and the der::Message trait moved to der::message::Message

const-oid: add borrowed form of OID (e.g. ObjectIdentifierRef)

In certain cases it would be nice to be able to have a "borrowed" type which is backed by a byte slice containing the BER/DER encoding of an OID. This would be particularly nice in conjunction with the der::Any type, allowing conversions backed by references.

With a sufficiently powerful const fn we could even accept the OID as an array of integer arcs (or ideally, even a string!) and handle encoding the DER serialization at compile time. This is also possible with proc macros but ideally we could avoid those.

Remove byteorder-dependency

The byteorder crate is not strictly needed since Rust 1.32 and gets pulled in a lot via RustCrypto. Is there interest in removing it, raising the minimum supported Rust version to 1.32 in the process?

Use ref in hex macro

Hello,

Is there a workaround to use a reference instead of a string literal in hex!?

let key = String::from("ff00ff00ff00ff00ff00");
hex!(key.as_str()); // expected one string literal

Should this macro support format arguments, as in println!("{}", key)? If so, I can make a PR, but I'm not sure.

Thanks

[pkcs8] SPKI null-byte preceding key bytes

Hi,

I'm trying to get the key bytes out of a public key in SPKI format but am getting unexpected results with a 0 preceding the key's bytes. Not really familiar with the matter so I might be misunderstanding something here.

I tried PEM and DER encoded public keys derived from a private key through:

openssl rsa -inform der -in rsa-2048-private-key.der -outform der -pubout > pub.pkcs8
openssl rsa -in rsa-2048-private-key.pem -pubout > pub.pem
    let pkcs = pkcs8::PublicKeyDocument::from_der(&fs::read("testdata/pub.pkcs8").unwrap()).unwrap();
    let pkcs_bytes = pkcs.spki().subject_public_key;

    // using ring to get a keypair out of the private key file
    use ring::signature::KeyPair;
    use ring::signature::{RsaKeyPair, UnparsedPublicKey};
    let key = fs::read("testdata/rsa-2048-private-key.der").unwrap();
    let key_pair = RsaKeyPair::from_der(&key).unwrap();
    let pub_key = key_pair.public_key();
    
    assert_eq!(pub_key.as_ref(), &pkcs.spki().subject_public_key); // fails
    assert_eq!(pub_key.as_ref(), &pkcs.spki().subject_public_key[1..]); // works

    // using the pem crate to parse the key
    let key = pem::parse(&pub_key2).unwrap();
    assert_eq!(pub_key.as_ref(), &key.contents[24..]) // manually skipping the header works, too

I can decode the key here: https://lapo.it/asn1js/#MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEArdF_BiYKSDz0urb6edkdrjWYyYhI02jZw8dpIy8iezVVqZmOPo9ya6dxYDp23ldZ5f5wrFA3kgGVymPcme-gw92MlmBy4mGF1EEpbQ4WKJjxtFmpHe75H0mSj4AwsfWuFDcuAGim0tG0ba_ufV1ySOvJYypqULa9brr0Rcg8SkS-EW317pXOF4TJlqSlKdAlHnvsZbQkU7lyLSsGgt-Somhlew5mmQhtiJ-jtXqXEtJyFBMpokBwScR0UhieL8hKTB6h5c2EgaC8NXSjhPvkJIjwykgy_zFuj91V-Oz2vX3_R6mogyEyP8McPfCvGDJ7KwhvvnO0B-NG2I3lwQDKfpWTTKGiCvvD2NM2ICWopKkDSj_Tqp6nqCyxdgR35IjsP4XcWyA9I4PggE4PYPB6yrewfvCsUMXAUB-VIPZJr8bSeZBj044vLbhHu0rOTTZkkcRBK2Iqd6w5ZJjuFRGrc4_PBJh_6lKVTbfR7_JvLzVChvhLWLwy1LHr88RLiK1bJuMa1Z7070m5Bbe13E4aNqUAaTUriro8UWVbHE0QRubym3as24u2013xt6Z5G7Iz_F-yJbU3H3UyxjjxgVDQQKbNOrhCWld--G_zynuW6HepJfTrYcSt4JSan2gbaMV-0HO9vCK-nKKrF7slLwX4YXaYbLwXI3MJrKytg3yRan8CAwEAAQ

which highlights the 0-byte at the end of the BitString header. Digging a bit into ASN.1 documentation, it seems that byte denotes the number of unused bits in the BitString, which DER requires to be 0: https://letsencrypt.org/docs/a-warm-welcome-to-asn1-and-der/#bit-string-encoding. Is it expected to include this byte in the subject_public_key while excluding the rest of the BitString header?

I'm running into this issue while retrieving PEM / SPKI encoded keys from a DB to verify a signature with ring, so I can't just change the format of the keys through openssl as the ring documentation suggests to do.
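For illustration, stripping that leading byte could look like the following sketch (a hypothetical helper, not part of the pkcs8 API), which also enforces the DER rule that the unused-bits count must be zero:

```rust
// Sketch: a SubjectPublicKeyInfo BIT STRING body starts with one byte
// giving the number of unused bits, which DER requires to be 0 here.
// Return the raw key bytes, or None if the prefix byte is invalid.
fn bit_string_bytes(subject_public_key: &[u8]) -> Option<&[u8]> {
    match subject_public_key.split_first() {
        Some((&0, rest)) => Some(rest),
        _ => None, // nonzero unused-bit count is invalid for DER public keys
    }
}
```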

spki: replace AlgorithmParameters with Any

The spki crate defines an AlgorithmParameters enum containing Any, Null, or Oid variants.

This was hastily (re-)added in #267 to work around the absence of a conversion from ObjectIdentifier to Any, as Any borrows a backing buffer which needs to contain the DER serialization of an ASN.1 value.

In the time since, the const-oid v0.4.4 release changed the internal representation of an OID to its BER/DER serialization (see #317), and der v0.2.10 added a From<&'a ObjectIdentifier> for Any<'a> impl (see #317, #319).

It should now be possible to fully replace spki::AlgorithmParameters with der::Any, which is both simpler and more faithful to the ASN.1 schema for [AlgorithmIdentifier]. However, this is a breaking change so it can't happen until the next minor release of spki.

Use hex-literal to parse file containing hex string

For now, hex-literal can only operate on string literals:

let hex_array = hex_literal::hex!("00010203");

However, if I have a file, which contains the text "00010203" (not binary data), and I want to calculate its hex value by

let hex_str = include_str!("path/to/hex_file");
let hex_array = hex_literal::hex!(hex_str);

This can't compile because "expected single string literal". What can I do to achieve this at compile time?
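One possible workaround on recent Rust (a sketch, not a hex-literal feature) is a const fn that decodes the included string at compile time, assuming the file contains exactly the hex digits with no trailing newline:

```rust
// Sketch of a const-fn hex decoder (requires Rust 1.57+ for panics in
// const fns and 1.51+ for const generics).
const fn hex_val(b: u8) -> u8 {
    match b {
        b'0'..=b'9' => b - b'0',
        b'a'..=b'f' => b - b'a' + 10,
        b'A'..=b'F' => b - b'A' + 10,
        _ => panic!("invalid hex digit"),
    }
}

const fn decode_hex<const N: usize>(s: &str) -> [u8; N] {
    let bytes = s.as_bytes();
    assert!(bytes.len() == 2 * N, "hex string has wrong length");
    let mut out = [0u8; N];
    let mut i = 0;
    while i < N {
        out[i] = hex_val(bytes[2 * i]) << 4 | hex_val(bytes[2 * i + 1]);
        i += 1;
    }
    out
}

// Usage sketch (assumes the file holds exactly "00010203", no newline):
// const DATA: [u8; 4] = decode_hex(include_str!("path/to/hex_file"));
```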

base64ct doesn't check for "invalid padding"

Background: I'm currently writing my own Base64 and PEM decoder in Zig and I'm basing my work on yours. I've been testing my implementation against the Zig stdlib's and noticed some irregularities.

This implementation and the original by Sc00bz (along with some other ones I tested[1]) seem to be too accepting of encoded input. For example, the octet 0x1f is Hw/Hw== after encoding. However, if you pass the string H(x|y|z)(==)? to the decoder, it'll happily decode it as 0x1f. In other decoders[2], this produces an error - in Zig's case it returns InvalidPadding. I'm unsure if this happens because of the total lack of a LUT.

[1]: coreutils base64 -d, Go stdlib
[2]: libsodium, OpenBSD b64decode -r/b64_pton, Zig stdlib, base64 crate, radix64 crate.
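The missing check can be sketched like this (standalone illustration, not base64ct's internals): in a final "XY==" group the low 4 bits of Y's 6-bit value are discarded, and strict decoders require them to be zero (likewise the low 2 bits in an "XYZ=" group). That is exactly why Hw== decodes to 0x1f but Hx==, Hy==, Hz== should be rejected.

```rust
// Standard Base64 alphabet lookup (illustrative, not constant-time).
fn b64_val(c: u8) -> Option<u8> {
    match c {
        b'A'..=b'Z' => Some(c - b'A'),
        b'a'..=b'z' => Some(c - b'a' + 26),
        b'0'..=b'9' => Some(c - b'0' + 52),
        b'+' => Some(62),
        b'/' => Some(63),
        _ => None,
    }
}

// Check that the discarded trailing bits of the final character are zero.
fn trailing_bits_ok(last_char: u8, pad_len: usize) -> bool {
    let mask = match pad_len {
        2 => 0b1111, // "XY==": 4 unused bits
        1 => 0b11,   // "XYZ=": 2 unused bits
        _ => 0,
    };
    b64_val(last_char).map_or(false, |v| v & mask == 0)
}
```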

dbl: remove unsafe code

Currently dbl uses unsafe code only for converting byte arrays to [u64; N] and back. We should probably change the API in a way that would allow us to remove it, e.g. by working on u32 and/or u64 arrays.
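For reference, the conversions themselves can already be written without unsafe using the from/to_*_bytes APIs; a sketch for a 16-byte block (the little-endian choice here is illustrative, not necessarily what dbl uses):

```rust
use std::convert::TryInto;

// Safe conversion of a 16-byte block to two u64 words and back.
fn block_to_u64x2(block: &[u8; 16]) -> [u64; 2] {
    [
        u64::from_le_bytes(block[..8].try_into().unwrap()),
        u64::from_le_bytes(block[8..].try_into().unwrap()),
    ]
}

fn u64x2_to_block(words: [u64; 2]) -> [u8; 16] {
    let mut out = [0u8; 16];
    out[..8].copy_from_slice(&words[0].to_le_bytes());
    out[8..].copy_from_slice(&words[1].to_le_bytes());
    out
}
```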

b64ct -> base64ct?

I originally created the b64ct crate as an extraction of the "B64" (in the PHC string format sense) code in the password-hash crate.

For context, "B64" is Base64 without padding (i.e. =) and whitespace, using the normal Base64 alphabet.

However, I've gone to use it for two other purposes relating to private key formats (which could really benefit from a constant-time implementation) which go beyond what is possible with the crate today:

  • pkcs8 is using it for its "PEM" (in the RFC 7468 sense) implementation, which requires chunking at 64 encoded characters (i.e. inserting newlines), and also padding with =
  • elliptic-curve is now using it for its JWK implementation, which requires Base64uri. It's doing this by running a substitution across a b64ct string (which largely defeats the point of a constant-time implementation)

I think it'd be good to expand b64ct into a base64ct crate which supports at least these three flavors:

  • "B64" for password-hash
  • "PEM encoding" for PKCS#8
  • Base64uri for JWT

error[E0658]: procedural macros cannot be expanded to expressions

When I used the latest crate in a test, I got the error message "error[E0658]: procedural macros cannot be expanded to expressions". I cannot figure out what's wrong here :( (I'm a newbie to Rust).

[dependencies]
hex-literal = "0.3.0"
extern crate hex_literal;

use hex_literal::hex;

fn main() {
    const DATA: [u8; 4] = hex!("01020304");

    assert_eq!(DATA, [1, 2, 3, 4]);
}
$ cargo run
error[E0658]: procedural macros cannot be expanded to expressions
 --> src/main.rs:6:27
  |
6 |     const DATA: [u8; 4] = hex!("01020304");
  |                           ^^^^^^^^^^^^^^^^
  |
  = note: see issue #54727 <https://github.com/rust-lang/rust/issues/54727> for more information

error: aborting due to previous error

For more information about this error, try `rustc --explain E0658`.
error: could not compile `hhx`.

To learn more, run the command again with --verbose.

my environment as below:

$ rustc --version
rustc 1.44.1 (c7087fe00 2020-06-17)
$ uname -a
Darwin localhost 19.6.0 Darwin Kernel Version 19.6.0: Sun Jul  5 00:43:10 PDT 2020; root:xnu-6153.141.1~9/RELEASE_X86_64 x86_64

der: support lengths larger than 64kB

The der crate presently supports documents just under 64kB (at most 65535 bytes).

While having a maximum document size bound in general seems like a good idea, the specific choice of 64kB is imposed because the length decoder/encoder implementations do not support longer lengths, and only support encoded lengths of up to 3-bytes. Internally the Length newtype uses u32 and thus could potentially support a larger Length value.

It might make sense to expand this by one additional byte: 4-bytes total, 1-byte encoding the length-of-the-length, 3-bytes for representing the length itself as a big endian integer. That would raise the max document size to 16384kB (16777216 bytes).
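The proposal can be sketched as plain DER definite-length encoding extended by one byte (a standalone illustration, not the der crate's internals):

```rust
// DER definite-length encoding: short form for < 0x80, otherwise
// 0x80 | length-of-length, followed by big-endian length bytes.
// The 0x83 arm is the proposed extension to 3-byte lengths.
fn encode_length(len: u32) -> Vec<u8> {
    match len {
        0..=0x7f => vec![len as u8],
        0x80..=0xff => vec![0x81, len as u8],
        0x100..=0xffff => vec![0x82, (len >> 8) as u8, len as u8],
        0x1_0000..=0xff_ffff => {
            vec![0x83, (len >> 16) as u8, (len >> 8) as u8, len as u8]
        }
        _ => panic!("length exceeds proposed 3-byte maximum"),
    }
}
```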

Feature request, allow comments in hex-literal::hex macro

I would like to be able to use comments like so:

   hex!(
       "
        // header
        0001 006c 2112a44238656d797950694b78506e6e
        // username
        0006 0025 63636431623031363037000000
        // etc
        c057 0004 00010032 // or here
     ");

As / is not currently a valid character, it could be added without causing any breaking changes.

// would just mean ignore all characters until the next line.

If you are open to this idea, I am happy to make a PR.
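The proposed semantics could be sketched as a simple preprocessing pass over the macro input (illustrative only, not how the proc macro would literally be structured):

```rust
// Drop everything from "//" to end of line; the remaining whitespace
// is already ignored by hex decoding.
fn strip_comments(input: &str) -> String {
    input
        .lines()
        .map(|line| line.split("//").next().unwrap_or(""))
        .collect::<Vec<_>>()
        .join("")
}
```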

unused import of core::slice in block-buffer

warning: unused import: `core::slice`
  --> /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/block-buffer-0.7.3/src/lib.rs:11:5
   |
11 | use core::slice;
   |     ^^^^^^^^^^^
   |
   = note: `#[warn(unused_imports)]` on by default

b64ct: subtle overflow bug

While working on the in-place decoding function, I noticed a subtle overflow bug. Currently decoded_len is implemented as:

pub const fn decoded_len(bytes: &str) -> usize {
    (bytes.len() * 3) / 4
}

For lengths greater than usize::MAX/3 it will produce incorrect results. Obviously in practice it's really unlikely that someone will work with more than gigabyte B64-encoded strings and on 64-bit platforms it will be practically impossible to trigger this bug. But nevertheless it's a clear bug.

The easiest solution would be to cast usize to u128 and back, but it's a rather inelegant solution, and it seems a bit hard for the compiler to properly infer properties of the resulting value. Another solution would be to use two branches, for values bigger than usize::MAX/3 and not. The last solution would be to use an alternative formula which produces identical results.

The same applies to encoded_len as well, since it contains bytes.len() * 4.
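A sketch of the alternative-formula option: splitting the length before multiplying keeps every intermediate small, and the result matches (len * 3) / 4 for all inputs, since the remainder term is at most (3 * 3) / 4.

```rust
// Overflow-free equivalent of (bytes.len() * 3) / 4.
pub const fn decoded_len(bytes: &str) -> usize {
    let n = bytes.len();
    (n / 4) * 3 + ((n % 4) * 3) / 4
}
```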

u64;4

Hi,

How can I use it to produce a [u64; 4] instead of a byte array?

Thanks! :)
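For context, one plain way to reinterpret the [u8; 32] that hex! produces as a [u64; 4] is a chunked conversion like this sketch (big-endian is an arbitrary choice here):

```rust
use std::convert::TryInto;

// Reinterpret 32 bytes as four big-endian u64 words.
fn to_u64x4(bytes: [u8; 32]) -> [u64; 4] {
    let mut out = [0u64; 4];
    for (i, chunk) in bytes.chunks_exact(8).enumerate() {
        out[i] = u64::from_be_bytes(chunk.try_into().unwrap());
    }
    out
}
```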

Y2k still causing issues

I'm amazed and honestly a little excited if I actually found a Y2k bug 21 years after the fact, but I can't seem to encode UtcTime values when the Duration specified during construction is a date after December 31st, 1999 at 23:59:59.

This works:

let test_time = UtcTime::new(std::time::Duration::from_secs(946684799u64)).unwrap();
let der_encoded_time = test_time.to_vec().unwrap();

This does not work:

let test_time = UtcTime::new(std::time::Duration::from_secs(946684800u64)).unwrap();
let der_encoded_time = test_time.to_vec().unwrap();
thread 'x509::tests::test_serialization' panicked at 'called `Result::unwrap()` on an `Err` value: Error { kind: Value { tag: Tag(0x17: UTCTime) }, position: None }', src/x509.rs:283:51
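For reference, the two-digit year rule the encoder needs to apply (per RFC 5280 §4.1.2.5.1) can be sketched as follows; the boundary at 946684800 seconds is exactly 2000-01-01T00:00:00Z:

```rust
// UTCTime two-digit year rule: "50".."99" map to 1950-1999 and
// "00".."49" map to 2000-2049; anything else needs GeneralizedTime.
fn utctime_year_digits(year: u32) -> Option<u8> {
    match year {
        1950..=1999 => Some((year - 1900) as u8),
        2000..=2049 => Some((year - 2000) as u8),
        _ => None,
    }
}
```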

der: OPTIONAL incorrectly implemented + multi-tag support for CHOICE

...and unfortunately fixing it is somewhat tricky, especially when handling types that represent an ASN.1 CHOICE.

Right now OPTIONAL only works as the last field in a SEQUENCE. It can't skip a field if there is another field with a different tag. It should be implemented something like this:

impl<'a, T> Decodable<'a> for Option<T>
where
    T: Decodable<'a> + Tagged,
{
    fn decode(decoder: &mut Decoder<'a>) -> Result<Option<T>> {
        if decoder.peek() == Some(T::TAG as u8) {
            T::decode(decoder).map(Some)
        } else {
            Ok(None)
        }
    }
}

...but there's another problem: we need to be able to handle an OPTIONAL field which is also a CHOICE, which comes up right away in spki::AlgorithmParameters.

To make that work, the Tagged::TAG associated constant needs to be changed so a type can specify it supports multiple tags. One option would be to change it to be a slice of supported tags, although some investigation would be needed to determine what else might break if such a change were made.

Regardless, any solution here is a breaking change, so this can't be fixed until der v0.3 at the very least.
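The slice-of-tags option can be sketched in isolation like this (Tag, Tagged, and the peeked byte here are simplified stand-ins for der's actual types, not its API):

```rust
// Simplified stand-in for der's tag enum.
#[derive(Clone, Copy)]
enum Tag {
    Null = 0x05,
    Oid = 0x06,
}

// A multi-tag version of Tagged: a slice of supported tags instead of
// a single TAG associated constant.
trait Tagged {
    const TAGS: &'static [Tag];
}

// A CHOICE-like type such as spki::AlgorithmParameters can then list
// every tag it accepts.
struct AnyParameters;

impl Tagged for AnyParameters {
    const TAGS: &'static [Tag] = &[Tag::Null, Tag::Oid];
}

// The Option<T> decoder's peek check becomes a membership test.
fn matches_tag<T: Tagged>(peeked: u8) -> bool {
    T::TAGS.iter().any(|t| *t as u8 == peeked)
}
```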

Feedback on collectable

I'm coming at this from the perspective of having need for an allocation-free X509 certificate generator, or more generally, the "DER encoder to end the plethora of semi-done DER encoders" :)

The feedback is that collectable (or a related trait for random insert, e.g. insertable or spliceable) should have some kind of "(overlapping) memmove" trait (and not just a Vec::insert-style operation that is called repeatedly). std::Vec has splice, which is (maybe still?) slow: https://internals.rust-lang.org/t/add-vec-insert-slice-at-to-insert-the-content-of-a-slice-at-an-arbitrary-index/11008. In heapless-bytes, I implemented an insert_slice_at by hand: https://docs.rs/heapless-bytes/0.1.1/src/heapless_bytes/lib.rs.html#141-156.

A need arises in allocation-free TLV encoding (to know L, need to encode V or somehow know its encoding's length first - both in the same pre-allocated buffer). I think der::Encodable solves this kind-of at compile time, by requiring the encoded_length method in the trait, which someone has to implement on composite types (at least I think that's what happens - do correct me if I'm wrong). So you could easily do a to_heapless_vec implementation there; I assume part of the motivation for collectable is to abstract over this and get a to_collectable method.

Whereas derp::Der::nested (which I think is an extraction from ring) and x509::der::der_tlv both do an allocation. I fix this for my current use cases in asn1derpy::Der::nested (a temporary fork of derp) exactly by doing an insert_slice_at.

But I'd really like a shared combinator-style approach (like x509) without the allocations. Ideally then der would grow to cover both fixed types and dynamic encoding. Given both these libraries have prestigiously short names :)

The goal for me is an API where the generic n: usize capacity parameter (or N: ArrayLength<u8> en attendant min_const_generics) only needs to be specified once in the return type. Saying this from brute-forcing a heapless implementation of x509.rs in https://github.com/nickray/x509.rs/blob/heapless/src/der.rs#L107-L116, which has multiple issues:

  • explicit heapless::ArrayLength bounds all over the place (which I'd hope that a more developed collectable could avoid)
  • explicit type hints about the N everywhere
  • the mentioned allocations in x509's der_tlv implementation (which could be replaced by asn1derpy's mem-move, using collectable, if this had some kind of insert_iterator_at / efficient splice.

What do you think? :)
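For concreteness, the insert_slice_at idea amounts to one overlapping move plus one copy; a sketch over a plain Vec (heapless-bytes does the equivalent by hand over a fixed capacity):

```rust
// Insert `slice` at `index`, shifting the tail right with a single
// memmove-style copy_within rather than repeated Vec::insert calls.
fn insert_slice_at(buf: &mut Vec<u8>, index: usize, slice: &[u8]) {
    let old_len = buf.len();
    buf.resize(old_len + slice.len(), 0);
    // shift the tail right, then fill the gap with the new slice
    buf.copy_within(index..old_len, index + slice.len());
    buf[index..index + slice.len()].copy_from_slice(slice);
}
```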

hex-literal: accept list of literals?

Right now, if we are to write code like this:

hex1!("
    AABBCCDD
    00112233
");

rustfmt transforms it into:

hex1!(
    "
    AABBCCDD
    00112233
"
);

Frankly it's really annoying, but I don't think we will get a fix on the rustfmt side. I know that there is a workaround:

hex!(
    "AABBCCDD
     00112233"
);

But to me personally it looks a bit weird.

So how about extending the hex! macro to accept a list of literals which would be concatenated after decoding? Not only would it help with the formatting issue, it would also allow adding comments which would be properly highlighted in IDEs, unlike the extension added in #512:

hex!(
    "AABBCCDD" // comment
    "00112233"
);

What do you think? It looks like this change should be completely backwards compatible.

cpufeatures: incongruities between ARM64 hwcaps and Rust SHA* target features

Following up from #393:

The output of rustc --target=aarch64-unknown-linux-gnu --print target-features shows the following cryptography-related target features on ARM64 (which is the same for aarch64-apple-darwin)

    sha2                               - Enable SHA1 and SHA256 support.
    sha3                               - Enable SHA512 and SHA3 support.

This suggests that sha2 implies SHA-1 support, and that sha3 implies SHA(2)-512 support (but curiously, that the aforementioned sha2 does not).

This is not consistent with the Linux hwcaps or the ARM registers they map to (or, for that matter, with the algorithm families the respective algorithms belong to):

https://www.kernel.org/doc/html/latest/arm64/elf_hwcaps.html

HWCAP_SHA1: Functionality implied by ID_AA64ISAR0_EL1.SHA1 == 0b0001.
HWCAP_SHA2: Functionality implied by ID_AA64ISAR0_EL1.SHA2 == 0b0001.
HWCAP_SHA3: Functionality implied by ID_AA64ISAR0_EL1.SHA3 == 0b0001.
HWCAP_SHA512: Functionality implied by ID_AA64ISAR0_EL1.SHA2 == 0b0010.

On macOS targets, support for sha1 and sha2 is implicit, however support for SHA-512 and SHA-3 intrinsics can be queried through sysctl(3).

Question:

How should cpufeatures handle this mapping? For example, should checking for support for the sha3 target feature using cpufeatures test for support for both SHA(2)-512 and SHA-3, and return false unless both are available?

Sidebar: do the Rust target feature mappings actually make sense here? Should sha3 perhaps be decoupled from SHA(2)-512?

Missing sodium_add and sodium_increment functions

Currently there are no analogs of the sodium_add and sodium_increment functions in RustCrypto. It would also be good to support both variants, treating nonces as big-endian and as little-endian numbers. For instance, in tox we have to reverse bytes because libsodium supports only the little-endian option.
