
ares's Introduction

Hello World

     

🐝 Autumn | 💻 Site Reliability Security Engineer | 🌏 London, UK

About me

I do site reliability security engineering with a focus on identity & access management.

Check out my blog posts here

Author of:

  • CVE-2024-32152 - LaTeX Blocklist Bypass vulnerability
  • CVE-2024-29073 - LaTeX Incomplete Blocklist vulnerability
  • CVE-2024-32484 - Flask Invalid Path Reflected Cross-Site Scripting (XSS) vulnerability
  • CVE-2024-26020 - MPV script injection vulnerability



ares's People

Contributors

bee-san, burning-eggs, dependabot[bot], gregorni, oddron, skeletaldemise, swanandx


ares's Issues

[Bug] Ares tries reciprocal encodings twice

Ares will do combinations of encodings that cancel each other out because they are reciprocal. This wastes time and makes the output confusing.
Atbash and reversed text are examples of this, as encoding and decoding are the same operation.
Example:

Is the plaintext 'https://github.com/bee-san/Ares' which is The plaintext is Uniform Resource Locator (URL). [Y/n]?
y
SUCCESSFUL 😁
PLAINTEXT: "https://github.com/bee-san/Ares"
DECODERS USED: Reverse -> Reverse

Implement Git Cliff into CI to generate nicer changelogs

Git Cliff is a Rust-based CLI tool:
https://github.com/orhun/git-cliff

It generates nicer changelogs based on commits. It uses conventional commits to generate changelogs like so:

# Changelog

All notable changes to this project will be documented in this file.

## [unreleased]

### Bug Fixes

- Fixed a bug in new_strings (#45)

### Features

- Human checkers (#25)
- New bfs for exiting early (#42)
- Config gets initialized to default if it wasn't set (#57)
- Remove unused deps and set lto to thin
- Set codegen-units to 1 and lto to fat
- Return CrackResult instead of decoder names in path

<!-- generated by git-cliff -->

If we change our PR templates to use these, we can get some nicer automated release logs :)

We can put it into a GitHub action:
https://github.com/orhun/git-cliff#github-actions

Add existing encodings/ciphers from Ciphey

This is a list of all the currently supported encodings/ciphers in Ciphey.
We need to make them in Rust for Ares to achieve feature parity.

Encodings

  • Base2 (Binary)
  • Base8 (Octal)
  • Base10 (Decimal)
  • Base16 (Hexadecimal)
  • Base32
  • Base58 Bitcoin
  • Base58 Flickr
  • Base58 Ripple
  • Base58 Monero (Not in Ciphey)
  • Base62 #137
  • Base64
  • Base64 URL
  • Base69
  • Base85
  • Citrix CTX1 (Not in Ciphey)
  • Z85
  • ASCII Base85
  • Base91
  • Base65536
  • ASCII
  • Reversed text
  • Morse Code
  • DNA codons
  • Atbash
  • Standard Galactic Alphabet (aka Minecraft Enchanting Language)
  • Leetspeak
  • Baudot ITA2
  • URL encoding
  • SMS Multi-tap
  • DTMF
  • A1Z26
  • Prisoner's Tap Code
  • UUencode
  • Braille (Grade 1)

Ciphers

  • Caesar Cipher
  • ROT47 (up to ROT94 with the ROT47 alphabet)
  • ASCII shift (up to ROT127 with the full ASCII alphabet)
  • Vigenère Cipher
  • Affine Cipher
  • Railfence Cipher (Not in Ciphey)
  • Binary Substitution Cipher (XY-Cipher)
  • Baconian Cipher (both variants)
  • Soundex
  • Transposition Cipher
  • Pig Latin

Modern day cryptography

  • Repeating-key XOR
  • Single XOR

Esoteric languages

  • Brainfuck

Compression Methods

  • GZip

[BUG] Timer bugs

From #93

  • Get timeout duration from config
  • Change return type to return a TimeOutError ??
  • Clean up this snippet:

        recv(result_recv) -> exit_result => {
            // if we find an element that matches our exit condition, return it!
            // technically this won't check if the initial string matches our
            // exit condition, but this is a demo and i'll be lazy :P
            let exit_result = exit_result.ok(); // convert Result to Option
            if exit_result.is_some() {
                trace!("Found exit result: {:?}", exit_result);
                return exit_result;
            }
        },
  • This error can be nicer error!("TIMEOUT!!!");
  • Create CLI argument for timeout

[BUG] Morse code is broken

I do not think it likes line feeds, and a lot of delimiters are not supported; we should add them!

Docs Ideas

  • Turn on clippy lint for .expect / .unwrap and handle them properly
  • doc strings for all!
  • reverse encoder can be our example. we can heavily comment it

[BUG] Use a different wordlist

The English checker sucks because the wordlist used does not feature words like "swan" or "test". We need to create or find a new wordlist.

Citrix CTX1 is broken

cargo run -- -t '6CeYA2mXQ8iXm8pTmmujrB8g4G8dNSCgTV9Cvk25KQmS'
   Compiling project_ares v0.4.0 (/Users/autumnskerritt/Documents/src/Ares)
    Finished dev [unoptimized + debuginfo] target(s) in 1.84s
     Running `/Users/bee/Documents/src/Ares/target/debug/ares -t 6CeYA2mXQ8iXm8pTmmujrB8g4G8dNSCgTV9Cvk25KQmS`
thread '<unnamed>' panicked at 'attempt to subtract with overflow', src/decoders/citrix_ctx1_decoder.rs:81:21
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

Add File Input

For the bot: we can make it open a pastebin link as an alternative to file input 🤔

Non determinism tests

TODO: Write tests to detect non-determinism.

Idea: a CI-only check that runs cargo test 100 times and tells us if any run fails.

Non-determinism was a hugeeeeeee fucking problem in Ciphey, so it's important we catch this ASAP :pepe_pray:

[BUG] [Tech Debt] BFS is slower than we'd like

#107

Every Text struct contains a field:

text: Vec<String>,

Which contains every decoding (even if it's just 1 decoding).

After each iteration of BFS we loop over all of the vectors we have collected and flatten them from a Vec<Vec<Text>> to a Vec<Text>, where text: Vec<String> has a single element (meaning we can index into the 0th element [0] and get the result).

        let mut new_strings_to_be_added = Vec::new();
        for text_struct in new_strings {
            for decoded_text in text_struct.text {
                new_strings_to_be_added.push(Text {
                    text: vec![decoded_text],
                    // quick hack
                    path: text_struct.path.clone(),
                });
            }
        }

This introduces some issues:

  1. It is not efficient to have an O(n^2) loop after our big main loop
  2. We are using .clone() here, which is slow
  3. It is not efficient to make a new vector with just 1 element
  4. It does not support nice paths. If it was Caesar, it won't say "Caesar with shift of 13", for example

This is a hack to get us to support this feature.
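For comparison, the same flattening written with flat_map (a sketch only, assuming the Text struct above; the path clone is still needed because each child keeps its own copy of the path):

    let new_strings_to_be_added: Vec<Text> = new_strings
        .into_iter()
        .flat_map(|text_struct| {
            let path = text_struct.path;
            text_struct.text.into_iter().map(move |decoded_text| Text {
                text: vec![decoded_text],
                path: path.clone(),
            })
        })
        .collect();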

In the future we might want to look at:

Editing this bit of code so it supports Vectors of Vectors:
https://github.com/bee-san/Ares/blob/1bcc994052db17e7db948055d012810353c76f4d/src/searchers/bfs.rs#LL32-L33C69

Or adding another loop below this code so it turns into a O(n^2) nested loop:

while !current_strings.is_empty() {

[BUG] Ares cannot decode base64 -> caesar

In PR #88 Ares cannot decode it in this order:

cargo run -- -t 'nTIfoT8tq29loTD='
    Finished dev [unoptimized + debuginfo] target(s) in 0.15s
     Running `target/debug/ares -t nTIfoT8tq29loTD=`
FAILED 😭

cargo run -- -t 'dXJ5eWIgamJleXE='
    Finished dev [unoptimized + debuginfo] target(s) in 0.18s
     Running `target/debug/ares -t dXJ5eWIgamJleXE=`
I think the plaintext is a English.txt.
Possible plaintext: 'hello world' (y/N):
y
SUCCESSFUL 😁
PLAINTEXT: "hello world"
DECODERS USED: base64 → Caesar Cipher

Log Analysis

[2022-11-17T15:09:24Z TRACE ares::checkers::english] Checking word aGVsbG8gd29ybGQ= with words_found 0 and input length: 16

Ares decodes the Caesar cipher (ROT13) to the correct Base64 string:

[screenshot]

To succeed all it needs to do is run base64 on it.

On the next round of breadth first search Ares reports it does not have any decoders:

[2022-11-17T15:09:24Z TRACE ares::searchers::bfs] Refreshed the vector, []

[BUG] Library API is broken

Describe the bug
Since the Human checker was added, the library API has been broken, because it blocks waiting on a "yes". We need to pass a config to turn it off. For the library this should always be true, but the CLI should default to "Api=False".

[BUG] When the plaintext has significant invisible characters alert the user

Currently we print to the terminal, which may obscure invisible characters. I propose that when the plaintext is made up of 20 or 30% invisible characters, we alert the user with something like:

❓ 41% of the plaintext is invisible characters, would you like to save to a file instead? (y/N)
y

Please enter a filename: (default: $HOME/ares_text.txt)

Search Nodes & Edges - What should they look like

Notion Page Link <--- This has better formatting and looks nicer

Problem

When we perform the decryptions, one of the questions we have to solve is:

How do you know what decryption to do next?

This is solved by a search algorithm.

This proposal will be for the search algorithm for Ares.

Solution

Our nodes will be the decryption text, and our edges will be the decryption modules.

The struct for nodes should look like (taken from here):

struct Nodes<V> {
    children: Vec<Nodes<V>>,
    parents: Vec<Nodes<V>>,
    value: V,
}

Where the value is the decoded text.

A simple Breadth First Search is:

use std::collections::VecDeque;

impl<V> Nodes<V> {
    fn bfs(&self, f: impl Fn(&V)) {
        let mut q = VecDeque::new();
        q.push_back(self);

        while let Some(t) = q.pop_front() {
            (f)(&t.value);
            for child in &t.children {
                q.push_back(child);
            }
        }
    }
}

Recording Parents


Because each edge (decoding) takes one input and produces one output, following the path to the parents is easy.

All we need to do is when calling the child, tell it what node we are currently on. We can do this by appending to the vector.

To keep the last X nodes, we can use a First in First Out queue. When the length of the vector reaches X, we remove the first node.
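A minimal sketch of that FIFO idea with a VecDeque (names hypothetical):

    use std::collections::VecDeque;

    // Keep only the last `cap` ancestors: when full, drop the oldest.
    fn record_parent<'a>(parents: &mut VecDeque<&'a str>, node: &'a str, cap: usize) {
        if parents.len() == cap {
            parents.pop_front();
        }
        parents.push_back(node);
    }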

Edges

Because each edge is a decoding / decryption algorithm, we need something along the lines of an array of all decryption objects we can run.

This array should be sorted according to Godly Searcher; it has to be sorted because:

  1. If we do all decodings first, when will we get to the decryptions?
  2. And vice-versa.

Our first searcher will be breadth-first-search, so we do not need to sort them.

Ideally, instead of keeping the edges in a vector on each node, we keep a global list. Something like this:

[screenshot: Global Array of Edges, Nodes, and their heuristic score.]

This way, we always do the node with the best heuristics next. If we didn't keep a global list, we'd have to expand node-by-node which isn't optimal because perhaps after Rot13 on text2 → text3 the next best choice is Binary Code on text1 → text4, not whatever else is attached to Text2.

Because of our idea of only using the ones that might work (entropy, index of coincidence) each node will have to add to the array its possible edges.

In a way, this array is a queue of which nodes (text) to expand using what edge (decoder).
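A minimal sketch of that global queue (the IDs and scores are made up): a max-heap keyed on the heuristic score, so the best (node, edge) pair anywhere in the tree is always expanded next.

    use std::collections::BinaryHeap;

    fn main() {
        // Entries are (heuristic score, node id, edge id). BinaryHeap is a
        // max-heap and tuples compare lexicographically, so the
        // highest-scoring pair always pops first.
        let mut queue: BinaryHeap<(u32, usize, usize)> = BinaryHeap::new();
        queue.push((80, 2, 0)); // e.g. Rot13 on text2
        queue.push((95, 1, 3)); // e.g. Binary Code on text1, expanded first
        while let Some((score, node, edge)) = queue.pop() {
            println!("expand node {node} with edge {edge} (score {score})");
        }
    }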

Multi-threading

There are quite a few ways we can multi-thread this; the most important point is that the queue every node adds to has to be locked.

  1. Rayon

    Because our data is:

    • In an array

    And we

    • Run the function .crack() on all the edges

    We may be able to easily use Rayon here.

  2. We manually create threads one at a time

    • We create one thread per edge, and each thread will obtain a lock on the queue system
  3. We divide the arrays and thread those mini-arrays

    • There is a context switching cost to (2); we can avoid that with (3).
  4. We use Async

    • Async has less of a context switching cost and allows us to run one async task per edge.

I am leaning towards Rayon as it's easier.
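A minimal sketch of option 1 (the Decoder type and crack signature are stand-ins, not Ares's real API):

    use rayon::prelude::*; // rayon = "1" in Cargo.toml

    struct Decoder {
        name: &'static str,
    }

    impl Decoder {
        // Hypothetical stand-in for the real crack method.
        fn crack(&self, text: &str) -> Option<String> {
            Some(format!("{} tried {}", self.name, text))
        }
    }

    // par_iter fans the work out over Rayon's thread pool; results come
    // back in the original order, so no manual locking is needed here.
    fn crack_all(decoders: &[Decoder], text: &str) -> Vec<Option<String>> {
        decoders.par_iter().map(|d| d.crack(text)).collect()
    }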

What will the Heuristic be?

There are a few things I want to achieve.

Speculative Galloping

Let's say our current decryption tree looks like:

[screenshot]

We have successfully decoded Base64 twice in a row. Because Base64 fails to decode if the input is not Base64 (sometimes it can succeed if lucky, but generally it fails), we will assume that the remaining decodings are Base64 too.

This pattern is common. You have a string that's encoded with Base64, Base32, whatever multiple times and you have to decode it.

Speculative Galloping comes from the Timsort algorithm.

In Timsort, roughly, if our array looks like [1, 2, 3, 4, 5] we can assume the rest of it is "sorted", and thus we "gallop" until we see it's unsorted.

In our speculative galloping, we speculate that the plaintext was encoded with Base64 (for example) multiple times. So we "gallop" by only doing Base64 until it fails.

Eventually, Base64 will either:

  • Come across a string it can't decode
  • Or will trigger the checkers

And we win!

This makes the edge case where the text is encoded with Base64 100 times much faster.
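A minimal sketch of the galloping loop (the decode signature is hypothetical):

    // Keep applying the same decoder while it succeeds; stop at the first
    // string it rejects, then hand control back to the normal search.
    // (A real version would also run the checkers on each `next`.)
    fn gallop(mut text: String, decode: impl Fn(&str) -> Option<String>) -> String {
        while let Some(next) = decode(&text) {
            text = next;
        }
        text
    }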

All Decoders are Equal, but some are more equal than others

There are a lot of decoders. Some of them are fast, some of them are popular. For example, Minecraft Enchanting Table is not as popular as Base64.

For that reason, we can prioritise decoders.

We can set:

  • Popularity ratings (like name-that-hash and pywhat). The more popular something is, the higher its number! From 0 → 1.
  • Speed. It is much quicker to decode Base64 or Binary than it is to decode some other things, because if you see the digit "7" in "binary", you know it's not binary and can stop early. We can benchmark the speed using the Filter System.

Minecraft Enchanting Table and XOR? Unlikely buddy.

It's unlikely to see a chain of decryptions like:

[screenshot]

In our Configuration file we can define "rules" that will help speed things up.

We should also define a rule like "No base64 then rot13" in our config file, as that's very unlikely.

These will, of course, be optional.

Entropy

If the previous 5 nodes do not show a normalisation in entropy, then we are not "getting close to the plaintext" and should abandon that path using alpha-beta pruning, unless we are in speculative galloping mode.

Entropy "normalises" the closer we get to the plaintext; look at this:

[screenshot]

This is encrypted with Base64 -> Rot13 -> Vigenère (key: “key”).

The Shannon Entropy is 5.23.

When we decode the Base64:

[screenshot]

It’s now 3.88. We can tell if we are going in the right direction by the normalisation of Entropy.

Therefore, we should:

  • Prioritise paths where their entropy "normalises" the most and:
  • Prune paths where the entropy does not normalise (or gets closer to the encrypted / compressed part).

This will allow us to:

  • Run longer since we are freeing memory
  • Be faster over larger inputs since we are deleting unnecessary paths.

This should be optional as I have not benchmarked or properly tested this idea.

🧙‍♂️ Knowing what the right decryption is


Entropy for Base64 falls around 3 - 5. We can use this information to determine what the next decryption method is. For example, Rot13's Entropy may be 3 or 4.

If we decrypt text and we get:

[screenshot]

We can guess it's rot13 and do that next. We can do this because classical ciphers are not truly random and we can "guess" what it is by looking at it, the same for encodings.

We need to normalise the entropy (Cyberchef doesn't have this) by dividing by the length. This is because the shorter the text is, the less entropy it has (and vice-versa for larger texts).

This idea is not properly tested as Cyberchef does not support normalised entropy.
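For reference, a minimal sketch of the calculation itself: byte-level Shannon entropy, plus the divide-by-length normalisation proposed above (which, as said, is untested):

    // Shannon entropy in bits over the byte frequencies.
    fn shannon_entropy(text: &str) -> f64 {
        let bytes = text.as_bytes();
        let mut counts = [0usize; 256];
        for &b in bytes {
            counts[b as usize] += 1;
        }
        let len = bytes.len() as f64;
        counts
            .iter()
            .filter(|&&c| c > 0)
            .map(|&c| {
                let p = c as f64 / len;
                -p * p.log2()
            })
            .sum()
    }

    // The proposed (untested) length normalisation.
    fn normalised_entropy(text: &str) -> f64 {
        shannon_entropy(text) / text.len() as f64
    }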

We can also do this using Index of Coincidence.

The 10% rule

We can decrypt 10% of a string, and if that first 10% is valid (using quadgrams for a basic check, or another checker like entropy or chi-squared), we decrypt the rest (a sketch follows the list below).

We should only do this for slower ciphers like:

  • Xor
  • Vigenere
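A minimal sketch of the rule (decrypt and quick_check are hypothetical closures; this assumes ASCII ciphertext so the prefix slice lands on a char boundary):

    fn ten_percent_rule(
        ciphertext: &str,
        decrypt: impl Fn(&str) -> String,
        quick_check: impl Fn(&str) -> bool,
    ) -> Option<String> {
        let prefix_len = (ciphertext.len() / 10).max(1);
        // Decrypt only the first 10% and run the cheap check on it.
        if quick_check(&decrypt(&ciphertext[..prefix_len])) {
            Some(decrypt(ciphertext)) // looked plausible: pay the full cost
        } else {
            None // failed the cheap check: skip this attempt entirely
        }
    }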

❓ Decay

Generally the deeper we go, the less likely we are to decrypt.

If for some reason we are 40 levels deep on one node and have not explored the other nodes, we should prioritise them.

We can do this by using Exponential Decay to prioritise the paths which have not been searched yet.

Exponential decay - Wikipedia
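A minimal sketch of the weighting (lambda is a tuning constant we would have to pick):

    // Exponential decay: the deeper the node, the lower its priority, so a
    // node 40 levels deep scores far below a fresh shallow sibling.
    fn decayed_priority(base: f64, depth: u32, lambda: f64) -> f64 {
        base * (-lambda * f64::from(depth)).exp()
    }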

Success Time, Fail Time, Likely Chance

We should also take into account:

  • Success Time

If it was successful, how long would it take on average?

  • Failure Time

In the absolute worst case where it tries all keys, how long does that take?

  • Likely Chance

How likely is it that this text is encrypted with X? We can work this out on-the-fly using Entropy or one of our previous calculations.

So... What is the heuristic?

We should take all of these and multiply them, and divide them by the failure time.

If it takes very long to fail, we want to try it roughly last so we can get to the quicker things (decoders) first.
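Taken literally, a minimal sketch of that combination (the exact form is untested):

    // Multiply the factors, divide by the worst-case failure time: decoders
    // that take ages to fail sink to the back of the queue.
    fn heuristic(success_time: f64, likely_chance: f64, failure_time: f64) -> f64 {
        (success_time * likely_chance) / failure_time
    }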

We will need to heavily benchmark and test this; we can create a platform in our app to do so.

[BUG] When we return to the API, do not vectorise the result

/// // The result is an Option<DecoderResult> so we need to unwrap it
/// // The DecoderResult contains the text and the path
/// // The path is a vector of CrackResults which contains the decoder used and the keys used
/// // The text is a vector of strings because some decoders return more than 1 text (Caesar)
/// // Because the program has returned True, the first result is the plaintext (and it will only have 1 result).
/// // This is some tech debt we need to clean up
/// assert!(result.unwrap().text[0] == "The main function to call which performs the cracking.");

We should instead just return the text without a vector, or look at using this vector to store an array of answers for #129.


[Feature] Simple countdown

We should create a counter in the searcher which counts how many decodings there have been. After, say, 10k, we exit the loop and report back to the user.

This is cheaper and easier than a threaded timer, which we would also need to pause during the human checker.
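A minimal sketch of the countdown (the queue type here is hypothetical):

    use std::collections::VecDeque;

    fn search(mut queue: VecDeque<String>) {
        const MAX_DECODINGS: u64 = 10_000;
        let mut decodings_done: u64 = 0;
        while let Some(_current) = queue.pop_front() {
            decodings_done += 1;
            if decodings_done > MAX_DECODINGS {
                println!("Stopped after {MAX_DECODINGS} decodings.");
                return; // report back to the user instead of running forever
            }
            // ... run decoders on _current and push the results ...
        }
    }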

[Feature] Return keys from ciphers and output them

Ares should return the keys found from ciphers like Caesar and print them along with the path.
CrackResult already has a key field:

// Key is optional as decoders do not use keys.
pub key: Option<&'static str>,

[Technical debt] Ares uses strings instead of bytes for everything

Ares currently uses strings in decoders, checkers, and the searcher. This causes a problem when attempting to add support for modern encryption like XOR as results will not be in UTF-8 strings but bytes.

Furthermore, Ares should not assume that results from decoders will be in UTF-8. They could be in other encodings like UTF-16, UTF-32, etc.

The solution is to refactor Ares to use bytes everywhere. This will let us implement modern encryption like XOR, AES, DES, etc., and support other text encodings.
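A minimal sketch of why bytes matter, using single-byte XOR: the decoder naturally produces bytes, and UTF-8 should only be applied at the boundary.

    fn xor_single(input: &[u8], key: u8) -> Vec<u8> {
        input.iter().map(|b| b ^ key).collect()
    }

    fn main() {
        let out = xor_single(b"hello", 0x20);
        // Interpret as text only at the edge; the bytes may not be UTF-8.
        match String::from_utf8(out) {
            Ok(s) => println!("utf-8: {s}"),
            Err(e) => println!("not utf-8: {:x?}", e.into_bytes()),
        }
    }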

[BUG] When the input is the plaintext, do not return None as it breaks the API

    if check_if_input_text_is_plaintext(text) {
        debug!(
            "The input text provided to the program {} is the plaintext. Returning early.",
            text
        );
        return_early_because_input_text_is_plaintext();
        return None;
    }

This returns None, which tells the API it failed, so it reports a failure. Instead we should return a Some of some kind, perhaps a DecoderResult which states that the input is the plaintext (a sketch follows).
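A minimal sketch of that change (the DecoderResult field names here are an assumption):

    // Instead of None, return a successful DecoderResult with an empty path,
    // so the API can tell "already plaintext" apart from "failed to crack".
    if check_if_input_text_is_plaintext(text) {
        debug!(
            "The input text provided to the program {} is the plaintext.",
            text
        );
        return Some(DecoderResult {
            text: vec![text.to_string()],
            path: vec![], // no decoders were needed
        });
    }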

0.0.1 release

  1. Human checker
  2. Add base decoders
    2.1. Write docs on how to do (2) for everyone else
    2.2. Fix any bugs or nasty things that come up with (2)
  3. Create a nice readme

[Proposal] Printing Module

Our printing is all over the place, we need to centralise all of our printing into a singular module. This enables us to:

  • Easily change what's printed
  • Make sure it's accessible
  • Have easy-to-use functions to make printing simpler
  • Make it pretty 🤩

Proposal: Paths

Problem

When Ares goes Caesar -> Base64 -> Morse Code, we don't actually report this ordering. It's important we do, as it's useful information!

What should this look like?

I think a final solution should look like:

Encryption methods used:
Caesar → Base64 → MorseCode

However, we should have a flag or consider this format:

Encryption methods used:
1. Caesar - Uryyb
2. Base64 - VXJ5eWI=
3. Morse Code - ...- -..- .--- ..... . .-- .. -...-

It is hard to count how many decodings there were, and in a CTF you might get asked "what was the 3rd encoding used?"

The text on the right is the text before it's decoded by the decoding function on the left.

We can easily add the numbers using a flag, but the text is a bit harder to add -- especially if it is very long. Perhaps for a file output?

How can we do this?

We can create a struct containing both the encrypted text and the decoders used:

struct Text<'a> {
    text: &'a str,
    decoding_path: Vec<&'a str>,
}

Where decoding_path is an array of decoders like ["Caesar", "Base64"].

In the future to add support for text as above, we can make text into an array too (but not now!)

The reason is that, when creating a new CrackResult, we don't know about the previous decodings used unless they are passed down as a function argument, and if we have to pass them anyway, why not just keep them with the text? That way we could return the same struct from bfs! (name the struct plz haha)

[TECHNICAL DEBT] Add more information to our logs

I was reading our logs earlier and they're a bit confusing to read; we should add more info.

We should also look at using structured logging, but this is a bit hard for a CLI program without some sort of interface like Kibana. Perhaps a new tool I can make to view structured logs locally?

[TECHNICAL DEBT] Decoders use getter methods instead of something clever, like dereferencing the box

What?

In this commit:
fd4963e

The Crack trait has added 2 new methods:

  1. get_tags to get all the tags of a decoder
  2. get_name to get the name of the decoder

This is added to every decoder and will be very annoying to update or use in the future.

Why?

It would be nicer if we could simply loop over the boxes:

    Decoders {
        components: vec![
            Box::new(reversedecoder),
            Box::new(base58_bitcoin),
            Box::new(base58_monero),
            Box::new(base58_ripple),
            Box::new(base58_flickr),
            Box::new(base64),
            Box::new(base91),
            Box::new(base64_url),
            Box::new(base65536),
            Box::new(binary),
            Box::new(hexadecimal),
            Box::new(base32),
            Box::new(morsecodedecoder),
            Box::new(atbashdecoder),
            Box::new(caesardecoder),
            Box::new(citrix_ctx1),
        ],
    }

Like:

for i in decoders {
    i.name
}

But since they are in a box, we do not know which fields they have. We can only guarantee that each one implements the Crack trait (which lets us run crack on the decoder).

The compiler does not know what's in the box because Decoders holds a Vec of Box<dyn Crack + Sync>, not our concrete structs. It only knows that the things in the box implement the Crack and Sync traits; they could be completely different structs.

We did this because they are different implementations of the same struct (think like inheritance), which was a silly idea because Rust doesn't really support this 🙈

That commit is technical debt we have adopted to move faster in the moment and ship a product.
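For reference, a minimal self-contained sketch of why the getters exist (a simplified trait, not Ares's full Crack trait):

    trait Crack {
        fn crack(&self, text: &str) -> Option<String>;
        fn get_name(&self) -> &'static str;
    }

    struct ReverseDecoder;

    impl Crack for ReverseDecoder {
        fn crack(&self, text: &str) -> Option<String> {
            Some(text.chars().rev().collect())
        }
        fn get_name(&self) -> &'static str {
            "Reverse"
        }
    }

    fn main() {
        let decoders: Vec<Box<dyn Crack + Sync>> = vec![Box::new(ReverseDecoder)];
        for d in &decoders {
            // Only trait methods are visible through the box; fields are not.
            println!("{} -> {:?}", d.get_name(), d.crack("sera"));
        }
    }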

How will this affect us?

Any time we want to add a new field, say to get popularity, we need to add it to all of the decoders. As the decoders grow in number, this becomes ridiculous.

What can we do to fix this in the future?

There are some things we can try...

Remove the boxes

We remove all the boxes and let the compiler yell at us and fix each error as they come.

I do not like this as much because the priority queue requires the ability to have structs with crack() methods 🙈

Derive Macros

We can look into using derive macros, which let us add additional functionality to existing structs:
https://doc.rust-lang.org/reference/procedural-macros.html#derive-macros

This seems promising, but requires actually studying how macros work 😢

todo

  • Fix English checker
  • Make CI take in git lfs (done, not tested)
  • Provide checkers to decoders (@swanandx is doing this)
  • Benchmark properly
  • Add documentation (doing this)

Structs for `Decoder` are repeated

In the /Decoders folder we have one struct that is copied and pasted across all the files.

That struct has different values, but we can't impl on it twice.

/
- struct.rs
- obj1.rs
- obj2.rs

Where obj1 and obj2 both impl (implement) functions onto the struct from struct.rs, but are separate.

Basically, struct.rs is a parent class and I'd like to create children classes from it. But! You can't do that in Rust.

I think our only option right now is:

  • Create a macro that generates the struct in each file, so we have one "main struct" and we can generate structs from that (a sketch follows).
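A minimal sketch of that macro option (the field names are an assumption):

    // One macro generates the shared struct shape; each decoder file can
    // then instantiate its own copy and impl on it separately.
    macro_rules! decoder_struct {
        ($name:ident) => {
            pub struct $name {
                pub name: &'static str,
                pub tags: Vec<&'static str>,
            }
        };
    }

    decoder_struct!(Base64Decoder);
    decoder_struct!(CaesarDecoder);

    impl Base64Decoder { /* Base64-specific functions here */ }
    impl CaesarDecoder { /* Caesar-specific functions here */ }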

[BUG] Not all decoders who can fail, fail

Some of our decoders (the one I know about is Base64) can fail in other languages' implementations (e.g. Python's), but do not fail in Rust. It is essential we make them fail, as it reduces the number of paths our search algorithm explores, speeding the program up significantly. A sketch follows.
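A minimal sketch of a Base64 decoder that can fail, assuming the base64 crate's engine API (Ares may use a different crate):

    use base64::{engine::general_purpose::STANDARD, Engine as _};

    // Invalid characters or padding return Err, letting the searcher prune
    // the branch instead of exploring garbage.
    fn try_base64(input: &str) -> Option<Vec<u8>> {
        STANDARD.decode(input).ok()
    }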

[BUG] Add doc tests to checkers + Decoders


After 0.0.1

0.0.2

  • Benchmark it (so we can see how fast / slow it gets over time)
  • Quadgrams
  • Better English detection checker (currently it only matches 1 word)
  • Caesar cipher
  • Bytes

0.0.3

  • Nicer printing output
  • Nicer CLI tooling
  • 60 second timer
  • Crate API?

[BUG] Create benchmarks using criterion


[Bug] When printing possible plaintexts, it's all lowercase with no punctuation

Changing the English checker to normalize results before checking them messed up the printing.
See here:

Possible plaintext: 'sphinx of black quartz judge my vow' (y/N):
y
SUCCESSFUL 😁
PLAINTEXT: "Sphinx of black quartz, judge my vow."

The final plaintext is fine, but the possible plaintext is all lowercase with no punctuation.
