
tls-parser's Introduction


Rusticata

Overview

Rusticata is a test crate for network protocol parsers written in Rust.

It was written to show the feasibility of implementing safe and efficient parsers in Suricata. The real parsing code is now part of Suricata (starting from version 4.0), and must be enabled by configuring with the --enable-rust flag.

This project is now a playground for testing parsers, features and code.


Build

Run cargo build for a build in debug mode, cargo build --release for release mode.

Use cargo install to install the library, or set the LD_LIBRARY_PATH environment variable.

Testing

rusticata is mostly used to decode application layers in the pcap-analyzer project. See its documentation for examples.

License

This library is licensed under the GNU Lesser General Public License version 2.1, or (at your option) any later version.

tls-parser's People

Contributors

aguinetsb, algesten, andrew-finn, chifflier, cpu, dependabot[bot], geal, jackliar, rukai, xonatius


tls-parser's Issues

OID_REGISTRY is not public

static ref OID_REGISTRY: OidRegistry<'static> = {
        let reg = OidRegistry::default().with_all_crypto().with_x509();
        // OIDs not in the default registry can be added here
        reg
    };
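One possible fix, sketched here with std-only stand-ins (a `HashMap` in place of the real `OidRegistry` type from the oid-registry crate, and `OnceLock` in place of `lazy_static`): keep the static private and expose it through a public accessor function.

```rust
use std::collections::HashMap;
use std::sync::OnceLock;

// Stand-in for the real OidRegistry type (hypothetical simplification).
type OidRegistry = HashMap<&'static str, &'static str>;

// The static itself stays private...
static OID_REGISTRY: OnceLock<OidRegistry> = OnceLock::new();

// ...and is exposed through a public accessor instead.
pub fn oid_registry() -> &'static OidRegistry {
    OID_REGISTRY.get_or_init(|| {
        let mut reg = OidRegistry::new();
        // OIDs not in the default registry can be added here
        reg.insert("2.5.4.3", "commonName");
        reg
    })
}

fn main() {
    assert_eq!(oid_registry().get("2.5.4.3"), Some(&"commonName"));
}
```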

Basic parse_tls_plaintext example does not work

I'm trying out this package and it took me a while to figure out what was wrong. I think the example given in the README no longer works. It looks like the upgrade to nom broke it, as IResult is now an alias for a standard Result type and no longer has the Done variant.
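Under nom 4 and later, the old `IResult::Done(rem, value)` arms become plain `Ok((rem, value))` matches. A minimal self-contained sketch of the pattern, using a stand-in error type rather than nom's real `nom::Err`:

```rust
// Stand-in for nom's error type (the real crate uses nom::Err<E>).
#[derive(Debug, PartialEq)]
enum ParseErr {
    Incomplete,
}

// Since nom 4, IResult is just a Result over (remaining input, output).
type IResult<'a, O> = Result<(&'a [u8], O), ParseErr>;

// Toy parser: consume one byte.
fn parse_u8<'a>(i: &'a [u8]) -> IResult<'a, u8> {
    match i.split_first() {
        Some((b, rest)) => Ok((rest, *b)),
        None => Err(ParseErr::Incomplete),
    }
}

fn main() {
    // New style: match on Ok/Err instead of IResult::Done / IResult::Error.
    match parse_u8(&[0x16, 0x03]) {
        Ok((rem, byte)) => println!("parsed {:#x}, {} byte(s) remaining", byte, rem.len()),
        Err(e) => println!("parse error: {:?}", e),
    }
}
```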

nom4 release

hello! :)

I've successfully ported my application to nom4 using your nom4 branch. Is there anything blocking this branch from being released? Is there any way to help?

Thanks!

Parsers visibility and Nom dependency

Hello,

For a project, I need to quickly parse only some TLS records of a session.
To reduce allocations, I'd like to be able to parse just a TlsRecordHeader without its data, and then parse the content only if needed.
I saw the parse_tls_raw_record function, but it needs a complete record to work.
What I'd like to write is something like this:

let data = [0x16, 0x03, 0x01, 0x00, 0x51];
let (rem, header) = parse_tls_record_header(&data[..]);

//Do something with header
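As a rough illustration of what header-only parsing could look like, here is a self-contained sketch (hand-rolled, not the crate's actual implementation) that reads just the 5-byte TLS record header — content type (1 byte), version (2 bytes), length (2 bytes, big-endian) — without touching the record body:

```rust
#[derive(Debug, PartialEq)]
struct TlsRecordHeader {
    record_type: u8, // e.g. 0x16 = handshake
    version: u16,    // e.g. 0x0301 = TLS 1.0 record-layer version
    len: u16,        // length of the record body that follows
}

// Returns None if fewer than 5 bytes are available (incomplete input).
fn parse_tls_record_header(i: &[u8]) -> Option<(&[u8], TlsRecordHeader)> {
    if i.len() < 5 {
        return None;
    }
    let header = TlsRecordHeader {
        record_type: i[0],
        version: u16::from_be_bytes([i[1], i[2]]),
        len: u16::from_be_bytes([i[3], i[4]]),
    };
    Some((&i[5..], header))
}

fn main() {
    // Handshake record, TLS 1.0, body length 0x51 — body not present here.
    let data = [0x16, 0x03, 0x01, 0x00, 0x51];
    let (rem, header) = parse_tls_record_header(&data).unwrap();
    assert!(rem.is_empty());
    println!("{:?}", header);
}
```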

I have another question, concerning the nom dependency this time.
I need to add nom as a dependency in order to use your crate, even though I don't use it for anything else.
Wouldn't it be better for your crate to use pub extern crate to re-export IResult from nom?
I know this practice is under debate, but I think it's very inconvenient to have to pull in an entire crate just to use another one.

Thanks for your time

Parsing Client and Server extensions

Followup from #11

Having specific functions to parse client/server extensions would be nice, and would allow accepting only the expected extensions in the given context.

At the moment, I'd be inclined to

  • keep a generic parse_tls_extension (accepting any valid extension)
  • add parse_tls_client_hello_extensions and parse_tls_server_hello_extensions
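A sketch of what direction-aware validation might look like. The enum and function names here are hypothetical, echoing the issue's proposal; the extension code points are from the IANA TLS ExtensionType registry:

```rust
#[derive(Clone, Copy, PartialEq)]
enum HelloDirection {
    ClientHello,
    ServerHello,
}

// Decide whether an extension code point is acceptable in the given
// hello. Only two illustrative rules are shown; a real table would
// cover the full registry.
fn extension_allowed(ext_type: u16, dir: HelloDirection) -> bool {
    match ext_type {
        0x0000 => true, // server_name: legal in both client and server hello
        0x000d => dir == HelloDirection::ClientHello, // signature_algorithms: client-only
        _ => true, // sketch: fall back to the generic, permissive behaviour
    }
}

fn main() {
    assert!(extension_allowed(0x0000, HelloDirection::ServerHello));
    assert!(!extension_allowed(0x000d, HelloDirection::ServerHello));
    assert!(extension_allowed(0x000d, HelloDirection::ClientHello));
}
```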

SNI compatibility

We've started using this crate to pre-parse the SNI and do some async work before handing it off to the synchronous rustls handshake.

We've stumbled on some issues parsing SNI in some scenarios. I'm not sure if it's just a question of different clients sending different TLS extension formats.

Here's an error we get:

Error(([170, 170, 0, 0, 0, 0, 0, 23, 0, 21, 0, 0, 18, 119, 119, 119, 46, 97, 109, 105, 114, 97, 104, 122, 97, 107, 121, 46, 99, 111, 109, 0, 23, 0, 0, 255, 1, 0, 1, 0, 0, 10, 0, 10, 0, 8, 218, 218, 0, 29, 0, 23, 0, 24, 0, 11, 0, 2, 1, 0, 0, 35, 0, 0, 0, 16, 0, 14, 0, 12, 2, 104, 50, 8, 104, 116, 116, 112, 47, 49, 46, 49, 0, 5, 0, 5, 1, 0, 0, 0, 0, 0, 13, 0, 20, 0, 18, 4, 3, 8, 4, 4, 1, 5, 3, 8, 5, 5, 1, 8, 6, 6, 1, 2, 1, 0, 18, 0, 0, 0, 27, 0, 3, 2, 0, 2, 122, 122, 0, 1, 0], Tag))

If I decode to UTF8 lossy, I get:

��www.amirahzaky.com�
���#h2http/1.1
zz

It looks like the data is in there? The error doesn't say this is incomplete, so I assume this is a parser issue?

Thanks for creating this, it's a huge time saver for us.

This is how I'm parsing it btw:

    let acceptor = store.get(&svc, &metrics, should_h2).await?;
    let mut bytes = [0; 1024];
    let n = io.peek(&mut bytes).await?;
    log::trace!("read {} bytes from tls handshake", n);
    let res = tls_parser::parse_tls_plaintext(&bytes);
    match res {
        Ok((_rem, record)) => {
            // rem is the remaining data (not parsed)
            // record is an object of type TlsRecord
            log::trace!("record: {:?}", record);
            match record.msg.get(0) {
                Some(tls_parser::tls::TlsMessage::Handshake(
                    tls_parser::tls::TlsMessageHandshake::ClientHello(hello_contents),
                )) => match hello_contents.ext {
                    Some(exts) => match tls_parser::tls_extensions::parse_tls_extension_sni(exts) {
                        Ok((_rem, tls_parser::tls_extensions::TlsExtension::SNI(sni))) => {
                            match sni.get(0) {
                                Some((tls_parser::tls_extensions::SNIType::HostName, hostname)) => {
                                    let hostname = std::str::from_utf8(hostname)?;
                                    log::debug!("got hostname: {}", hostname);
                                    store.ensure_cert(svc, should_h2, hostname).await;
                                }
                                _ => log::warn!("unhandled SNI Type!"),
                            };
                        }
                        Err(e) => {
                            log::debug!("error decoding sni: {:?}", e);
                        }
                        _ => {
                            log::warn!("did not find SNI");
                        }
                    },
                    _ => {
                        log::warn!("could not get client hello contents");
                    }
                },
                _ => log::warn!("didn't care about message type..."),
            };
        }
        Err(nom::Err::Incomplete(needed)) => {
            log::error!(
                "Defragmentation required (TLS record), needed: {:?}",
                needed
            );
        }
        Err(e) => {
            log::debug!("parse_tls_plaintext failed: {:?}", e);
        }
    };

(I get the error at error decoding sni line)

Ergonomics of the tls_serialize module?

All of the functions in the tls_serialize module require passing a (&mut [u8], usize).

There are a couple of issues:

  1. I'm not entirely sure what the usize is. If it's the size of the buffer, why not use the slice's length?

  2. I'm not sure how large of a buffer to pass in. I'd like to avoid arbitrary lengths/sizes in my own code, and I'd like to avoid innate knowledge about TLS's serialization layer, as that's really the responsibility of the library to know this. Ideally, I'd like to pass in a Write, or have the return be a Result<Vec<u8>, _>; I don't want to have to know, a priori, how to calculate a buffer of the correct length.

(Looking at the source code, it seems like a lot of this is pass-through to cookie_factory, and I'm not sure how to correct it. It seems odd to me that a serialization library would require function signatures like this, but perhaps that is simply how cookie_factory works?)
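One way to hide the buffer sizing, sketched with stand-ins (both `gen_example` and the `gen_to_vec` wrapper are hypothetical, not part of the crate): grow the buffer and retry until the cookie_factory-style `(&mut [u8], usize)` generator succeeds:

```rust
// The usize is the current write offset into the buffer; on success a
// generator returns the new offset.
type GenResult = Result<usize, ()>; // Ok(new offset) or Err (e.g. buffer too small)

// Stand-in generator: writes three fixed bytes at the given offset.
fn gen_example(buf: &mut [u8], offset: usize) -> GenResult {
    let bytes = [0x16, 0x03, 0x01]; // stand-in payload
    if offset + bytes.len() > buf.len() {
        return Err(());
    }
    buf[offset..offset + bytes.len()].copy_from_slice(&bytes);
    Ok(offset + bytes.len())
}

// Wrapper: grow-and-retry so callers never size the buffer by hand.
fn gen_to_vec(gen: impl Fn(&mut [u8], usize) -> GenResult) -> Vec<u8> {
    let mut cap = 64;
    loop {
        let mut buf = vec![0u8; cap];
        match gen(&mut buf, 0) {
            Ok(written) => {
                buf.truncate(written); // keep only the bytes actually written
                return buf;
            }
            Err(()) => cap *= 2, // too small: double and retry
        }
    }
}

fn main() {
    let out = gen_to_vec(gen_example);
    assert_eq!(out, vec![0x16, 0x03, 0x01]);
    println!("{:02x?}", out);
}
```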

TLS Extension length missing when serializing

When serializing a TLSPlaintext Client Hello using the gen_tls_plaintext method in version 0.9.2, the extension length is missing.

In this method https://github.com/rusticata/tls-parser/blob/tls-parser-0.9.2/src/tls.rs#L577 the extension length gets removed and is no longer in the TlsMessageHandshake::ClientHello::ext.
When serializing this member the length should get added back here: https://github.com/rusticata/tls-parser/blob/tls-parser-0.9.2/src/tls_serialize.rs#L184 .

Example of how this could be fixed: tls-parser-0.9.2...conblem:fix-tls-client-hello . The current tests sadly didn't catch this bug: https://github.com/rusticata/tls-parser/blob/tls-parser-0.9.2/src/tls_serialize.rs#L384 . If I find time, I will write a correct test case.

Parsing server SNI response fails

Looking at the code (and based on the behavior I'm witnessing), parse_tls_extension_sni_content unconditionally requires the 16-bit length to be present for SNI extension parsing to succeed:

pub fn parse_tls_extension_sni_content(i: &[u8]) -> IResult<&[u8], TlsExtension> {
    let (i, list_len) = be_u16(i)?;
    let (i, v) = map_parser(
        take(list_len),
        many0(complete(parse_tls_extension_sni_hostname)),
    )(i)?;
    Ok((i, TlsExtension::SNI(v)))
}

However the server is supposed to leave the extension data empty when it responds to the client:

A server that receives a client hello containing the "server_name" extension MAY use the information contained in the extension to guide its selection of an appropriate certificate to return to the client, and/or other aspects of security policy. In this event, the server SHALL include an extension of type "server_name" in the (extended) server hello. The "extension_data" field of this extension SHALL be empty.

Given the extension data may differ between client/server hello, I'd guess there would be some need for context specific parsing implementation.

Currently I'm working around this by grabbing the header length by hand and parsing each extension individually, which allows me to skip the failing SNI here.
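A lenient variant could special-case empty extension_data, as RFC 6066 requires for the server hello. A hand-rolled, self-contained sketch of that behaviour (not the crate's code):

```rust
// Parse the server_name extension_data into a list of host names.
// Each server_name_list entry is: name_type (1) + length (2) + host_name.
fn parse_sni_names(ext_data: &[u8]) -> Result<Vec<&[u8]>, &'static str> {
    if ext_data.is_empty() {
        // Server hello: RFC 6066 says extension_data SHALL be empty,
        // so return an empty list instead of failing on the missing length.
        return Ok(Vec::new());
    }
    if ext_data.len() < 2 {
        return Err("truncated server_name_list length");
    }
    let list_len = u16::from_be_bytes([ext_data[0], ext_data[1]]) as usize;
    let mut rest = ext_data.get(2..2 + list_len).ok_or("truncated list")?;
    let mut names = Vec::new();
    while !rest.is_empty() {
        if rest.len() < 3 {
            return Err("truncated entry");
        }
        let name_len = u16::from_be_bytes([rest[1], rest[2]]) as usize;
        let name = rest.get(3..3 + name_len).ok_or("truncated host_name")?;
        names.push(name);
        rest = &rest[3 + name_len..];
    }
    Ok(names)
}

fn main() {
    // Empty extension_data (server hello): succeeds with no names.
    assert_eq!(parse_sni_names(&[]), Ok(vec![]));
    // Client hello form: list_len=4, entry = type 0 + len 1 + "a".
    let data = [0x00, 0x04, 0x00, 0x00, 0x01, b'a'];
    assert_eq!(parse_sni_names(&data), Ok(vec![&b"a"[..]]));
}
```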

TLS ciphers miss TLS_EMPTY_RENEGOTIATION_INFO_SCSV

According to RFC5746

In order to enhance compatibility with such servers, this document defines a second signaling mechanism via a special Signaling Cipher Suite Value (SCSV) "TLS_EMPTY_RENEGOTIATION_INFO_SCSV", with code point {0x00, 0xFF}. This SCSV is not a true cipher suite (it does not correspond to any valid set of algorithms) and cannot be negotiated.

This makes the ja3 Rust implementation calculate a wrong ja3 TLS fingerprint when a TLS_EMPTY_RENEGOTIATION_INFO_SCSV cipher is present in the client hello.

Currently I haven't had time to figure out how to automatically insert this cipher into the tls-ciphersuites.txt file, so I'm raising an issue here first. If you have time to discuss and fix this problem, it would be wonderful :)

You can use this sample packet to reproduce this issue
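The eventual fix would presumably add the code point to the generated cipher registry. As a stand-in illustration (a plain `HashMap`, not the crate's actual registry type or data), the idea is simply to make lookups of {0x00, 0xFF} resolve instead of returning "unknown":

```rust
use std::collections::HashMap;

// Hypothetical registry shape; the real crate generates its table
// from tls-ciphersuites.txt.
fn cipher_registry() -> HashMap<u16, &'static str> {
    let mut reg = HashMap::new();
    // ... real suites generated from tls-ciphersuites.txt would go here ...
    reg.insert(0x1301, "TLS_AES_128_GCM_SHA256");
    // RFC 5746: not a true cipher suite, but it appears on the wire in
    // ClientHello cipher lists and must still resolve to a name.
    reg.insert(0x00FF, "TLS_EMPTY_RENEGOTIATION_INFO_SCSV");
    reg
}

fn main() {
    let reg = cipher_registry();
    assert_eq!(reg.get(&0x00FF), Some(&"TLS_EMPTY_RENEGOTIATION_INFO_SCSV"));
}
```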
