
raptorq's Introduction

raptorq


Overview

Rust implementation of RaptorQ (RFC6330)

Recovery properties: the probability of reconstructing the original message after receiving K + h packets is 1 - 1/256^(h + 1), where K is the number of packets in the original message and h is the number of additional packets received. See the "RaptorQ Technical Overview" by Qualcomm.
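As a quick, hand-checked illustration of that formula (not part of the crate): with h = 0 the success probability is already 1 - 1/256 ≈ 0.996, and with h = 2 it exceeds 0.9999999. A few lines of Rust reproduce the numbers:

fn main() {
    // P(success after K + h packets) = 1 - 1/256^(h + 1), per the formula above
    for h in 0..3u32 {
        let p = 1.0 - 1.0 / 256f64.powi(h as i32 + 1);
        println!("h = {h}: P(success) = {p:.9}");
    }
}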

Examples

See the examples/ directory for usage.
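If the examples/ directory is not at hand, the round trip looks roughly like the following. This is a minimal sketch mirroring examples/main.rs and the crate's high-level Encoder/Decoder API; consult the shipped example for the authoritative version.

use raptorq::{Decoder, Encoder, EncodingPacket};

fn main() {
    let data: Vec<u8> = (0..10_000u32).map(|i| (i % 256) as u8).collect();

    // Encode with an MTU of 1400 bytes and serialize each packet for transmission.
    let encoder = Encoder::with_defaults(&data, 1400);
    let packets: Vec<Vec<u8>> = encoder
        .get_encoded_packets(15) // 15 repair packets per block
        .iter()
        .map(|packet| packet.serialize())
        .collect();

    // The receiver needs the ObjectTransmissionInformation (config) out of band.
    let mut decoder = Decoder::new(encoder.get_config());
    let mut result = None;
    for packet in packets {
        result = decoder.decode(EncodingPacket::deserialize(&packet));
        if result.is_some() {
            break;
        }
    }
    assert_eq!(result.unwrap(), data);
}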

Benchmarks

The following were run on a Ryzen 9 5900X @ 3.70GHz

Symbol size: 1280 bytes (without pre-built plan)
symbol count = 10, encoded 127 MB in 0.259secs, throughput: 3953.4Mbit/s
symbol count = 100, encoded 127 MB in 0.217secs, throughput: 4716.3Mbit/s
symbol count = 250, encoded 127 MB in 0.215secs, throughput: 4757.9Mbit/s
symbol count = 500, encoded 127 MB in 0.216secs, throughput: 4724.6Mbit/s
symbol count = 1000, encoded 126 MB in 0.221secs, throughput: 4595.6Mbit/s
symbol count = 2000, encoded 126 MB in 0.230secs, throughput: 4415.8Mbit/s
symbol count = 5000, encoded 122 MB in 0.248secs, throughput: 3937.8Mbit/s
symbol count = 10000, encoded 122 MB in 0.289secs, throughput: 3379.1Mbit/s
symbol count = 20000, encoded 122 MB in 0.362secs, throughput: 2697.7Mbit/s
symbol count = 50000, encoded 122 MB in 0.482secs, throughput: 2026.1Mbit/s

Symbol size: 1280 bytes (with pre-built plan)
symbol count = 10, encoded 127 MB in 0.119secs, throughput: 8604.4Mbit/s
symbol count = 100, encoded 127 MB in 0.084secs, throughput: 12183.8Mbit/s
symbol count = 250, encoded 127 MB in 0.092secs, throughput: 11119.0Mbit/s
symbol count = 500, encoded 127 MB in 0.093secs, throughput: 10973.2Mbit/s
symbol count = 1000, encoded 126 MB in 0.093secs, throughput: 10920.7Mbit/s
symbol count = 2000, encoded 126 MB in 0.102secs, throughput: 9957.1Mbit/s
symbol count = 5000, encoded 122 MB in 0.111secs, throughput: 8797.9Mbit/s
symbol count = 10000, encoded 122 MB in 0.138secs, throughput: 7076.5Mbit/s
symbol count = 20000, encoded 122 MB in 0.178secs, throughput: 5486.3Mbit/s
symbol count = 50000, encoded 122 MB in 0.265secs, throughput: 3685.1Mbit/s
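For context on the "pre-built plan" rows: the plan captures the symbol-scheduling work for a given symbol count so it can be reused across source blocks instead of being recomputed per block. A minimal sketch of generating one is below; how the encoder then consumes the plan is shown authoritatively in benches/encode_benchmark.rs, so the constructor that accepts it is deliberately not guessed at here.

use raptorq::SourceBlockEncodingPlan;

fn main() {
    // Build the scheduling plan once for blocks of 100 symbols; it can then be
    // reused for every source block with that symbol count.
    let _plan = SourceBlockEncodingPlan::generate(100);
}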

Symbol size: 1280 bytes
symbol count = 10, decoded 127 MB in 0.398secs using 0.0% overhead, throughput: 2572.7Mbit/s
symbol count = 100, decoded 127 MB in 0.323secs using 0.0% overhead, throughput: 3168.5Mbit/s
symbol count = 250, decoded 127 MB in 0.302secs using 0.0% overhead, throughput: 3387.2Mbit/s
symbol count = 500, decoded 127 MB in 0.290secs using 0.0% overhead, throughput: 3519.0Mbit/s
symbol count = 1000, decoded 126 MB in 0.309secs using 0.0% overhead, throughput: 3286.8Mbit/s
symbol count = 2000, decoded 126 MB in 0.326secs using 0.0% overhead, throughput: 3115.4Mbit/s
symbol count = 5000, decoded 122 MB in 0.340secs using 0.0% overhead, throughput: 2872.2Mbit/s
symbol count = 10000, decoded 122 MB in 0.374secs using 0.0% overhead, throughput: 2611.1Mbit/s
symbol count = 20000, decoded 122 MB in 0.452secs using 0.0% overhead, throughput: 2160.5Mbit/s
symbol count = 50000, decoded 122 MB in 0.625secs using 0.0% overhead, throughput: 1562.5Mbit/s

symbol count = 10, decoded 127 MB in 0.398secs using 5.0% overhead, throughput: 2572.7Mbit/s
symbol count = 100, decoded 127 MB in 0.324secs using 5.0% overhead, throughput: 3158.8Mbit/s
symbol count = 250, decoded 127 MB in 0.303secs using 5.0% overhead, throughput: 3376.1Mbit/s
symbol count = 500, decoded 127 MB in 0.291secs using 5.0% overhead, throughput: 3506.9Mbit/s
symbol count = 1000, decoded 126 MB in 0.315secs using 5.0% overhead, throughput: 3224.2Mbit/s
symbol count = 2000, decoded 126 MB in 0.328secs using 5.0% overhead, throughput: 3096.4Mbit/s
symbol count = 5000, decoded 122 MB in 0.349secs using 5.0% overhead, throughput: 2798.2Mbit/s
symbol count = 10000, decoded 122 MB in 0.402secs using 5.0% overhead, throughput: 2429.3Mbit/s
symbol count = 20000, decoded 122 MB in 0.500secs using 5.0% overhead, throughput: 1953.1Mbit/s
symbol count = 50000, decoded 122 MB in 0.746secs using 5.0% overhead, throughput: 1309.1Mbit/s

The following were run on an Intel Core i5-6600K @ 3.50GHz, as of raptorq version 1.6.4

Symbol size: 1280 bytes (without pre-built plan)
symbol count = 10, encoded 127 MB in 0.423secs, throughput: 2420.6Mbit/s
symbol count = 100, encoded 127 MB in 0.393secs, throughput: 2604.2Mbit/s
symbol count = 250, encoded 127 MB in 0.373secs, throughput: 2742.5Mbit/s
symbol count = 500, encoded 127 MB in 0.362secs, throughput: 2819.1Mbit/s
symbol count = 1000, encoded 126 MB in 0.371secs, throughput: 2737.5Mbit/s
symbol count = 2000, encoded 126 MB in 0.401secs, throughput: 2532.7Mbit/s
symbol count = 5000, encoded 122 MB in 0.432secs, throughput: 2260.6Mbit/s
symbol count = 10000, encoded 122 MB in 0.492secs, throughput: 1984.9Mbit/s
symbol count = 20000, encoded 122 MB in 0.642secs, throughput: 1521.1Mbit/s
symbol count = 50000, encoded 122 MB in 0.862secs, throughput: 1132.9Mbit/s

Symbol size: 1280 bytes (with pre-built plan)
symbol count = 10, encoded 127 MB in 0.213secs, throughput: 4807.2Mbit/s
symbol count = 100, encoded 127 MB in 0.141secs, throughput: 7258.4Mbit/s
symbol count = 250, encoded 127 MB in 0.153secs, throughput: 6685.9Mbit/s
symbol count = 500, encoded 127 MB in 0.162secs, throughput: 6299.4Mbit/s
symbol count = 1000, encoded 126 MB in 0.165secs, throughput: 6155.3Mbit/s
symbol count = 2000, encoded 126 MB in 0.184secs, throughput: 5519.7Mbit/s
symbol count = 5000, encoded 122 MB in 0.214secs, throughput: 4563.4Mbit/s
symbol count = 10000, encoded 122 MB in 0.281secs, throughput: 3475.3Mbit/s
symbol count = 20000, encoded 122 MB in 0.373secs, throughput: 2618.1Mbit/s
symbol count = 50000, encoded 122 MB in 0.518secs, throughput: 1885.3Mbit/s

Symbol size: 1280 bytes
symbol count = 10, decoded 127 MB in 0.610secs using 0.0% overhead, throughput: 1678.6Mbit/s
symbol count = 100, decoded 127 MB in 0.484secs using 0.0% overhead, throughput: 2114.5Mbit/s
symbol count = 250, decoded 127 MB in 0.458secs using 0.0% overhead, throughput: 2233.5Mbit/s
symbol count = 500, decoded 127 MB in 0.438secs using 0.0% overhead, throughput: 2329.9Mbit/s
symbol count = 1000, decoded 126 MB in 0.450secs using 0.0% overhead, throughput: 2256.9Mbit/s
symbol count = 2000, decoded 126 MB in 0.485secs using 0.0% overhead, throughput: 2094.1Mbit/s
symbol count = 5000, decoded 122 MB in 0.534secs using 0.0% overhead, throughput: 1828.8Mbit/s
symbol count = 10000, decoded 122 MB in 0.621secs using 0.0% overhead, throughput: 1572.6Mbit/s
symbol count = 20000, decoded 122 MB in 0.819secs using 0.0% overhead, throughput: 1192.4Mbit/s
symbol count = 50000, decoded 122 MB in 1.116secs using 0.0% overhead, throughput: 875.1Mbit/s

symbol count = 10, decoded 127 MB in 0.609secs using 5.0% overhead, throughput: 1681.3Mbit/s
symbol count = 100, decoded 127 MB in 0.490secs using 5.0% overhead, throughput: 2088.6Mbit/s
symbol count = 250, decoded 127 MB in 0.463secs using 5.0% overhead, throughput: 2209.4Mbit/s
symbol count = 500, decoded 127 MB in 0.443secs using 5.0% overhead, throughput: 2303.6Mbit/s
symbol count = 1000, decoded 126 MB in 0.464secs using 5.0% overhead, throughput: 2188.8Mbit/s
symbol count = 2000, decoded 126 MB in 0.490secs using 5.0% overhead, throughput: 2072.7Mbit/s
symbol count = 5000, decoded 122 MB in 0.555secs using 5.0% overhead, throughput: 1759.6Mbit/s
symbol count = 10000, decoded 122 MB in 0.667secs using 5.0% overhead, throughput: 1464.1Mbit/s
symbol count = 20000, decoded 122 MB in 0.830secs using 5.0% overhead, throughput: 1176.6Mbit/s
symbol count = 50000, decoded 122 MB in 1.328secs using 5.0% overhead, throughput: 735.4Mbit/s

The following were run on a Raspberry Pi 3 B+ (Cortex-A53 @ 1.4GHz)

Symbol size: 1280 bytes (without pre-built plan)
symbol count = 10, encoded 127 MB in 5.078secs, throughput: 201.6Mbit/s
symbol count = 100, encoded 127 MB in 3.966secs, throughput: 258.1Mbit/s
symbol count = 250, encoded 127 MB in 4.293secs, throughput: 238.3Mbit/s
symbol count = 500, encoded 127 MB in 4.451secs, throughput: 229.3Mbit/s
symbol count = 1000, encoded 126 MB in 4.606secs, throughput: 220.5Mbit/s
symbol count = 2000, encoded 126 MB in 5.127secs, throughput: 198.1Mbit/s
symbol count = 5000, encoded 122 MB in 5.615secs, throughput: 173.9Mbit/s
symbol count = 10000, encoded 122 MB in 6.321secs, throughput: 154.5Mbit/s
symbol count = 20000, encoded 122 MB in 7.450secs, throughput: 131.1Mbit/s
symbol count = 50000, encoded 122 MB in 9.407secs, throughput: 103.8Mbit/s

Symbol size: 1280 bytes (with pre-built plan)
symbol count = 10, encoded 127 MB in 3.438secs, throughput: 297.8Mbit/s
symbol count = 100, encoded 127 MB in 2.476secs, throughput: 413.3Mbit/s
symbol count = 250, encoded 127 MB in 2.908secs, throughput: 351.8Mbit/s
symbol count = 500, encoded 127 MB in 3.085secs, throughput: 330.8Mbit/s
symbol count = 1000, encoded 126 MB in 3.284secs, throughput: 309.3Mbit/s
symbol count = 2000, encoded 126 MB in 3.700secs, throughput: 274.5Mbit/s
symbol count = 5000, encoded 122 MB in 4.045secs, throughput: 241.4Mbit/s
symbol count = 10000, encoded 122 MB in 4.451secs, throughput: 219.4Mbit/s
symbol count = 20000, encoded 122 MB in 4.948secs, throughput: 197.4Mbit/s
symbol count = 50000, encoded 122 MB in 6.078secs, throughput: 160.7Mbit/s

Symbol size: 1280 bytes
symbol count = 10, decoded 127 MB in 6.561secs using 0.0% overhead, throughput: 156.1Mbit/s
symbol count = 100, decoded 127 MB in 4.936secs using 0.0% overhead, throughput: 207.3Mbit/s
symbol count = 250, decoded 127 MB in 5.206secs using 0.0% overhead, throughput: 196.5Mbit/s
symbol count = 500, decoded 127 MB in 5.298secs using 0.0% overhead, throughput: 192.6Mbit/s
symbol count = 1000, decoded 126 MB in 5.565secs using 0.0% overhead, throughput: 182.5Mbit/s
symbol count = 2000, decoded 126 MB in 6.309secs using 0.0% overhead, throughput: 161.0Mbit/s
symbol count = 5000, decoded 122 MB in 6.805secs using 0.0% overhead, throughput: 143.5Mbit/s
symbol count = 10000, decoded 122 MB in 7.517secs using 0.0% overhead, throughput: 129.9Mbit/s
symbol count = 20000, decoded 122 MB in 8.875secs using 0.0% overhead, throughput: 110.0Mbit/s
symbol count = 50000, decoded 122 MB in 11.253secs using 0.0% overhead, throughput: 86.8Mbit/s

symbol count = 10, decoded 127 MB in 6.157secs using 5.0% overhead, throughput: 166.3Mbit/s
symbol count = 100, decoded 127 MB in 4.842secs using 5.0% overhead, throughput: 211.4Mbit/s
symbol count = 250, decoded 127 MB in 5.213secs using 5.0% overhead, throughput: 196.2Mbit/s
symbol count = 500, decoded 127 MB in 5.328secs using 5.0% overhead, throughput: 191.5Mbit/s
symbol count = 1000, decoded 126 MB in 5.630secs using 5.0% overhead, throughput: 180.4Mbit/s
symbol count = 2000, decoded 126 MB in 6.364secs using 5.0% overhead, throughput: 159.6Mbit/s
symbol count = 5000, decoded 122 MB in 7.035secs using 5.0% overhead, throughput: 138.8Mbit/s
symbol count = 10000, decoded 122 MB in 8.165secs using 5.0% overhead, throughput: 119.6Mbit/s
symbol count = 20000, decoded 122 MB in 9.929secs using 5.0% overhead, throughput: 98.4Mbit/s
symbol count = 50000, decoded 122 MB in 14.399secs using 5.0% overhead, throughput: 67.8Mbit/s

Public API

Note that the additional types exported by the benchmarking feature flag are not considered part of this crate's public API. Breaking changes to those types may occur without warning. The flag is only provided so that internal types can be used in this crate's benchmarks.

Python bindings

The Python bindings are generated using pyo3.

Some operating systems require additional packages to be installed.

$ sudo apt install python3-dev

maturin is recommended for building the Python bindings in this crate.

$ pip install maturin
$ maturin build --cargo-extra-args="--features python"

Alternatively, refer to the Building and Distribution section of the pyo3 user guide. Note that you must pass the --cargo-extra-args="--features python" argument to maturin when building this crate to enable the Python bindings.

License

Licensed under

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you shall be licensed as above, without any additional terms or conditions.

raptorq's People

Contributors

anthonymikh, cberner, ckaran, dutchghost, felixschorer, fossabot, jonil, lucab, mlegner, pgolovkin, sikabo, slesarew, vdagonneau


raptorq's Issues

Rabin IDA

Hi, great work here. Fascinating too.
I have a use case for a 5-of-7 IDA for data up to 1 MB, i.e. ending up with 7 pieces of which any 5 are enough to recreate the data.

I was pulling together a basic Rabin IDA for this, but looking quickly here I wonder if it's possible to make that happen with the config options. (What options should I set?)

The second question is: should I use this crate for that?

Fountain code which is an order of magnitude faster than RaptorQ.

I was looking at your RaptorQ implementation to evaluate the speed of an original fountain code which I have developed. (It's mathematically original, not based on RS, LT or tornado codes.)

On encoding (with a small number of symbols) I see an order of magnitude better performance.

  • Do you have any significant optimisations which may make this RaptorQ implementation faster? i.e. Is it already representative of the state of the art?
  • Are you interested in following this work? (I need to code equivalent benchmarks and create a file-focused command-line fountain and combiner.) I couldn't see your email address on github.

Potential Qualcomm legal liability?

There may be a legal liability when using RaptorQ for commercial purposes - Qualcomm can sue you.
Hopefully I'm wrong and someone can point that out.

This is Qualcomm's IPR statement: https://datatracker.ietf.org/ipr/2554/

Relevant section:

If the technology in RFC6330 "RaptorQ Forward Error Correction Scheme for Object Delivery" is included in a standards track or experimental document adopted by the IETF, and any claim of any patent issued from the above mentioned patents, patent applications or corresponding patents and patent applications is required for the implementation of any device that (a) fully implements such adopted standards track or experimental document; and (b) does not implement any wireless wide-area standard, Qualcomm will not assert any such claim against any party for making, using, selling, importing or offering for sale such device but solely with respect to the implementation of such adopted standards track or experimental document, provided, however that Qualcomm retains the right to assert its patent(s) issued on the above mentioned application or corresponding patent applications (including the right to claim past royalties) against any party that asserts a patent it owns or controls (either directly or indirectly) against any products of Qualcomm or any products of any of Qualcomm's Affiliates either alone or in combination with other products; and Qualcomm retains the right to assert its patents or application(s) against any product or portion thereof that does not fully implement the IETF standards track or experimental document.

Does this implementation "fully implement the IETF standards track or experimental document"?

Oh, if you are a corporate entity this IPR is probably cancer: you are giving Qualcomm license to infringe on any of your patents, and going after them means they go after you. But I'm more interested in how this affects small businesses.

no alloc API

I am using this crate in an embedded project and I would prefer that raptorq have an option for a no-alloc API. Besides being better for safety and bare-metal environments, it might actually give a small speed boost, as allocations are slow in the context of tight code.

I was planning on doing the work, but if you would be interested in merging it back in I figured it would be better to talk about it before coming up with a design.

Thanks for the great lib either way.

Making systematic_constants API public

Hey, I'm working on a GStreamer plugin for RaptorQ RTP FEC using this crate. One of the requirements defined in the relevant RFCs is for the encoder to provide information about kmax or K'. I can see there is a function to obtain that, but it is currently private:

systematic_constants::extended_source_block_symbols

Would you be willing to make that public? Of course I can provide a PR.

Thanks for all your amazing work!

How to run the files in examples folder

Hello Christopher,

Sorry for the naive questions. I'm new to Rust. When I run rustc main.rs or python3 main.py in the examples folder, it gives me an error like "library not found". Can you give us more instructions on how to run the main.rs and main.py examples in this project?

Thanks,
PJ

matrix

Hello, is this Raptor or RaptorQ? If it is RaptorQ, I have not found the matrix-related calculations.

API usage

Sorry for the silly ticket, but I'm having a hard time trying to figure out how the API of this crate is supposed to be consumed.

In particular, I'm failing to get a structured view answering the following:

  • why is EncodingPacket private?
  • how can I get encoded bytes out of the Encoder?
  • how can I decode a slice of bytes (e.g. from a file)?
  • can encoding/decoding work in a streaming way or do they need all input data upfront?

Add new function for Encoder taking an ObjectTransmissionInformation

Hey there!

I'm working on an application that blends RaptorQ and STROBE for efficient, secure transmission of files. In this combination it is ideal if the config can be created and sent to the other party first - before the final data to be encoded is ready.

Are you open to having an Encoder::new that takes the config and data?

Misaligned pointer errors with dev profile on Apple M2

When I generate a SourceBlockEncodingPlan with some specific values of symbol_count and compile with the dev profile on my MacBook Pro with an M2 processor, I get a "misaligned pointer dereference" somewhere in the SIMD code:

thread 'main' panicked at ~/.cargo/registry/src/index.crates.io-6f17d22bba15001f/raptorq-1.8.0/src/octets.rs:692:5:
misaligned pointer dereference: address must be a multiple of 0x8 but is 0x148e0967d
stack backtrace:
   0: rust_begin_unwind
             at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/std/src/panicking.rs:645:5
   1: core::panicking::panic_nounwind_fmt::runtime
             at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/core/src/panicking.rs:110:18
   2: core::panicking::panic_nounwind_fmt
             at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/core/src/panicking.rs:122:9
   3: core::panicking::panic_misaligned_pointer_dereference
             at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/core/src/panicking.rs:221:5
   4: raptorq::octets::store_neon
             at ~/.cargo/registry/src/index.crates.io-6f17d22bba15001f/raptorq-1.8.0/src/octets.rs:692:5
   5: raptorq::octets::fused_addassign_mul_scalar_binary_neon
             at ~/.cargo/registry/src/index.crates.io-6f17d22bba15001f/raptorq-1.8.0/src/octets.rs:206:9
   6: raptorq::octets::fused_addassign_mul_scalar_binary
             at ~/.cargo/registry/src/index.crates.io-6f17d22bba15001f/raptorq-1.8.0/src/octets.rs:116:24
   7: raptorq::octet_matrix::DenseOctetMatrix::fma_sub_row
             at ~/.cargo/registry/src/index.crates.io-6f17d22bba15001f/raptorq-1.8.0/src/octet_matrix.rs:41:9
   8: raptorq::pi_solver::IntermediateSymbolDecoder<T>::fma_rows_with_pi
             at ~/.cargo/registry/src/index.crates.io-6f17d22bba15001f/raptorq-1.8.0/src/pi_solver.rs:1242:17
   9: raptorq::pi_solver::IntermediateSymbolDecoder<T>::first_phase
             at ~/.cargo/registry/src/index.crates.io-6f17d22bba15001f/raptorq-1.8.0/src/pi_solver.rs:762:21
  10: raptorq::pi_solver::IntermediateSymbolDecoder<T>::execute
             at ~/.cargo/registry/src/index.crates.io-6f17d22bba15001f/raptorq-1.8.0/src/pi_solver.rs:1282:42
  11: raptorq::pi_solver::fused_inverse_mul_symbols
             at ~/.cargo/registry/src/index.crates.io-6f17d22bba15001f/raptorq-1.8.0/src/pi_solver.rs:1334:5
  12: raptorq::encoder::gen_intermediate_symbols
             at ~/.cargo/registry/src/index.crates.io-6f17d22bba15001f/raptorq-1.8.0/src/encoder.rs:378:16
  13: raptorq::encoder::SourceBlockEncodingPlan::generate
             at ~/.cargo/registry/src/index.crates.io-6f17d22bba15001f/raptorq-1.8.0/src/encoder.rs:185:24
  14: raptorq_playground::main
             at ./src/main.rs:46:16
  15: core::ops::function::FnOnce::call_once
             at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/core/src/ops/function.rs:250:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
thread caused non-unwinding panic. aborting.

My Rust versions are

$ cargo --version
cargo 1.76.0 (c84b36747 2024-01-18)
$ rustc --version
rustc 1.76.0 (07dca489a 2024-02-04)

Minimal working example:

use raptorq::SourceBlockEncodingPlan;

fn main() {
    let _plan = SourceBlockEncodingPlan::generate(20);
}

Some other example values that cause the same error up to 200:
19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200

When compiling with --release, the errors don't occur. Is this something that is expected? Any mitigations besides always using a release build?

Documentation

Hi all,

I am trying to use the library with the Python bindings. However, I am running into some issues and I cannot find documentation describing the available methods. Is there any? Even documentation for the Rust API would be good!

Remove ad-hoc serialization code

PR #27 ensures that most of the structs in the crate have serde::{Serialize, Deserialize} implemented on them. That means that any ad-hoc serialization code (such as ObjectTransmissionInformation::{serialize(), deserialize()}) is no longer necessary, and may actually be detrimental in the long run. Although it will be a breaking change requiring a major version bump, it may be a good idea to remove the ad-hoc code.

CPython wrapper

I have written a small wrapper to compile the library to a native CPython module.
Currently, it only features the high level Encoder/Decoder API.
The code is part of that repository.

Shall I open a PR to integrate the code into this repository?

Webassembly support?

Is WebAssembly currently supported? Moreover, Rust support for wasm SIMD intrinsics was stabilized recently (rust-lang/rust#86204), so it would be great if it could be used here.

error[E0658]: `while` is not allowed in a `const fn`

I want to run the benchmarks to see the results, but can't.

rustc 1.43.0
error[E0658]: `while` is not allowed in a `const fn`
   --> src/octet.rs:116:9
    |
116 | /         while j < 256 {
117 | |             result[i][j] = const_mul(i, j);
118 | |             j += 1;
119 | |         }
    | |_________^
    |
    = note: see issue #52000 <https://github.com/rust-lang/rust/issues/52000> for more information

   Compiling cast v0.2.3
   Compiling quote v1.0.7
error: aborting due to 6 previous errors

For more information about this error, try `rustc --explain E0658`.
error: could not compile `raptorq`.

To learn more, run the command again with --verbose.
warning: build failed, waiting for other jobs to finish...
error: build failed

Streaming API

Have there been any plans to extend the API to work with blocks, instead of passing the whole lot in a single [u8] on the Encoder side?

Hide more stuff

Just about everything in the crate has been declared pub, but I'm not sure if it really needs to be. If there are portions that should only be visible within the crate, then the proper marker is pub(crate). That will reduce the amount of code that is required to be kept fixed and public. Note that if you make anything non-public that was public, then you'll need to bump the major version number as it is a breaking change.
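For readers less familiar with Rust visibility, a tiny illustration of the distinction being suggested (generic Rust, not raptorq code):

// `pub` items are part of the published API and must remain stable;
// `pub(crate)` items are visible only inside the crate and can change freely
// without a semver-breaking release.
pub struct PublicType;
pub(crate) struct InternalType;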

ObjectTransmissionInformation question

Hi I am configuring ObjectTransmissionInformation as follows:

let config = raptorq::ObjectTransmissionInformation::with_defaults(length as u64, 256);
let mut decoder = raptorq::Decoder::new(config);

With that, I am assuming that the max_packet_size parameter stands for "the maximum size of the frame that can be encountered". Then, when I try to decode a sequence that consists only of packets 128 bytes long, I run into this assertion:

thread 'main' panicked at 'assertion failed: `(left == right)`
  left: `256`,
 right: `128`', raptorq-1.7.0/src/octets.rs:695:5

Am I misunderstanding the purpose of the max_packet_size parameter?
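Not a confirmed diagnosis of the panic above, but for reference: a minimal sketch of deriving the decoder's config directly from the encoder (using only Encoder::with_defaults, get_config, and Decoder::new, which appear elsewhere on this page), so that both ends agree on the symbol size implied by max_packet_size.

use raptorq::{Decoder, Encoder};

fn main() {
    let data = vec![0u8; 4096];
    // max_packet_size fixes the symbol size used on both sides.
    let encoder = Encoder::with_defaults(&data, 128);
    // Ship encoder.get_config() to the receiver out of band and build the
    // Decoder from it, rather than constructing a second, possibly mismatched
    // ObjectTransmissionInformation on the receiving end.
    let _decoder = Decoder::new(encoder.get_config());
}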

inversion performance improvements

Some ideas to improve performance of precode matrix inversion:
  • During "First Phase", you will find that r == 2 the vast majority of the time, optimizing for this and r <= 3 will improve performance.
  • Given enough overhead during decoding (eg. rank(gf2_matrix) == L), inversion can stay in gf2 and avoid expensive hdpc operations completely.
  • With some care, you can work only on the dense submatrix U from "Second Phase" and onwards.
  • If you are caching the schedule of operations or "prebuilt plan", you can leverage it during "Third Phase" of the initial computation to replay the required operations from the plan to create a sparse U upper instead of matrix multiplication as the rfc implies.

Hopefully these are helpful insights.

Encoding very large files

Are there any plans to support encoding a file too large to fit into memory, without simply encoding pieces separately? If not, is there an idea about how straightforward that would be to implement?

Performance degrades when symbols count is big

Hi, I've got a C++ implementation of RaptorQ. It's part of another project and is private for now, but it will be released eventually.

First, great work. You have achieved a really nice performance on 10K bytes.

But I have concerns about performance with bigger symbol_count.
For 10K data and symbol_size=512, symbol_count should be 20.

I've tried changing elements to 512 * 50000 in your encode 10KB benchmark, and it doesn't seem to finish in a reasonable amount of time.

Am I doing something wrong?
I'm running it with cargo bench --features benchmarking.
Have you tested whether the codec scales linearly to larger blocks?

As I recall, the RFC is quite cryptic about details critical for performance.

When to call decoding, duplicated outputs

Hi, I was using the Python binding to encode streaming data before sending it to the other side through a one-directional link with UDP packets.

I was wondering when I should call the decode function. Assume I have a block of data (same size as the MTU, 4000), plus 1 repair block (also MTU-sized, 4000). If I transmit these two packets to the other side, when should I call the decode function, given that I could receive both packets or, in case of loss, only one?

Packets are received one by one on the other side, so I need to know when to call the decode function and when to buffer packets before calling it. I found that if I call it on every packet, I end up with a duplicated data block, as the source packet and the repair packet lead to the same output.

Please enlighten me on the possible design for sending data through UDP packets.
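One possible shape for this, sketched in Rust since the Python binding exposes the same high-level Encoder/Decoder pair: feed every packet to the decoder as it arrives. decode returns None until enough symbols have been received and Some(data) once recovery succeeds, so there is no need to buffer packets yourself, and you can stop feeding (or ignore further returns) after the first success to avoid acting on the same block twice.

use raptorq::{Decoder, EncodingPacket, ObjectTransmissionInformation};

fn receive(config: ObjectTransmissionInformation, raw_packets: Vec<Vec<u8>>) -> Option<Vec<u8>> {
    let mut decoder = Decoder::new(config);
    for raw in raw_packets {
        // Call decode on every packet as it arrives.
        if let Some(data) = decoder.decode(EncodingPacket::deserialize(&raw)) {
            // First successful recovery of the block; later packets are redundant.
            return Some(data);
        }
    }
    None
}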

cargo bench fails to build

With the current HEAD, cargo bench fails to build:

$ git log -n 1
commit 95b6b5ae9100d2af9518f76450dba93bcad79902 (HEAD -> master, github/master, github/HEAD)
Author: Christopher Berner <[email protected]>
Date:   Sat Aug 29 22:35:08 2020 -0700

    Make serde support optional

$ git status
On branch master
Your branch is up to date with 'github/master'.

nothing to commit, working tree clean

$ cargo bench
   Compiling autocfg v1.0.1
   Compiling libc v0.2.77
⋮
   Compiling maybe-uninit v2.0.0
   Compiling cfg-if v0.1.10
   Compiling lazy_static v1.4.0
   Compiling serde v1.0.116
   Compiling semver-parser v0.7.0
   Compiling memchr v2.3.3
   Compiling byteorder v1.3.4
   Compiling proc-macro2 v1.0.21
   Compiling ryu v1.0.5
   Compiling unicode-xid v0.2.1
   Compiling scopeguard v1.1.0
   Compiling rayon-core v1.8.0
   Compiling getrandom v0.1.15
   Compiling either v1.6.0
   Compiling bitflags v1.2.1
   Compiling serde_json v1.0.57
   Compiling syn v1.0.40
   Compiling itoa v0.4.6
   Compiling serde_derive v1.0.116
   Compiling unicode-width v0.1.8
   Compiling hamming v0.1.3
   Compiling primal-estimate v0.2.1
   Compiling half v1.6.0
   Compiling same-file v1.0.6
   Compiling regex-syntax v0.6.18
   Compiling ppv-lite86 v0.2.9
   Compiling oorandom v11.1.2
   Compiling raptorq v1.4.2 (/home/chai/src/raptorq)
   Compiling semver v0.9.0
   Compiling crossbeam-utils v0.7.2
   Compiling memoffset v0.5.5
   Compiling num-traits v0.2.12
   Compiling crossbeam-epoch v0.8.2
   Compiling num-integer v0.1.43
   Compiling rayon v1.4.0
   Compiling itertools v0.9.0
   Compiling textwrap v0.11.0
   Compiling primal-bit v0.2.4
   Compiling walkdir v2.3.1
   Compiling rustc_version v0.2.3
   Compiling regex v1.3.9
   Compiling smallvec v0.6.13
   Compiling num_cpus v1.13.0
   Compiling atty v0.2.14
   Compiling csv-core v0.1.10
   Compiling cast v0.2.3
   Compiling serde_cbor v0.11.1
   Compiling quote v1.0.7
   Compiling regex-automata v0.1.9
   Compiling clap v2.33.3
   Compiling primal-sieve v0.2.9
   Compiling threadpool v1.8.1
   Compiling rand_core v0.5.1
   Compiling crossbeam-channel v0.4.4
   Compiling plotters v0.2.15
   Compiling bstr v0.2.13
   Compiling tinytemplate v1.1.0
   Compiling rand_chacha v0.2.2
   Compiling crossbeam-deque v0.7.3
   Compiling primal-check v0.2.3
   Compiling csv v1.1.3
   Compiling rand v0.7.3
   Compiling criterion-plot v0.4.3
   Compiling primal v0.2.3
   Compiling criterion v0.3.3
error[E0432]: unresolved import `raptorq::generate_constraint_matrix`
 --> benches/matrix_sparsity.rs:1:5
  |
1 | use raptorq::generate_constraint_matrix;
  |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ no `generate_constraint_matrix` in the root

error[E0432]: unresolved import `raptorq::IntermediateSymbolDecoder`
 --> benches/matrix_sparsity.rs:2:5
  |
2 | use raptorq::IntermediateSymbolDecoder;
  |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ no `IntermediateSymbolDecoder` in the root

error[E0432]: unresolved import `raptorq::Octet`
 --> benches/matrix_sparsity.rs:3:5
  |
3 | use raptorq::Octet;
  |     ^^^^^^^^^-----
  |     |        |
  |     |        help: a similar name exists in the module (notice the capitalization): `octet`
  |     no `Octet` in the root

error[E0432]: unresolved import `raptorq::Symbol`
 --> benches/matrix_sparsity.rs:4:5
  |
4 | use raptorq::Symbol;
  |     ^^^^^^^^^------
  |     |        |
  |     |        help: a similar name exists in the module (notice the capitalization): `symbol`
  |     no `Symbol` in the root

error[E0432]: unresolved imports `raptorq::extended_source_block_symbols`, `raptorq::BinaryMatrix`, `raptorq::SparseBinaryMatrix`
 --> benches/matrix_sparsity.rs:5:15
  |
5 | use raptorq::{extended_source_block_symbols, BinaryMatrix, SparseBinaryMatrix};
  |               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^  ^^^^^^^^^^^^  ^^^^^^^^^^^^^^^^^^ no `SparseBinaryMatrix` in the root
  |               |                              |
  |               |                              no `BinaryMatrix` in the root
  |               no `extended_source_block_symbols` in the root

error[E0432]: unresolved import `raptorq::Symbol`
  --> benches/codec_benchmark.rs:10:5
   |
10 | use raptorq::Symbol;
   |     ^^^^^^^^^------
   |     |        |
   |     |        help: a similar name exists in the module (notice the capitalization): `symbol`
   |     no `Symbol` in the root

error[E0432]: unresolved import `raptorq::Octet`
  --> benches/codec_benchmark.rs:11:46
   |
11 | use raptorq::{ObjectTransmissionInformation, Octet};
   |                                              ^^^^^
   |                                              |
   |                                              no `Octet` in the root
   |                                              help: a similar name exists in the module (notice the capitalization): `octet`

error: aborting due to 5 previous errors

For more information about this error, try `rustc --explain E0432`.
error: could not compile `raptorq`.

To learn more, run the command again with --verbose.
warning: build failed, waiting for other jobs to finish...
error: aborting due to 2 previous errors

For more information about this error, try `rustc --explain E0432`.
error: build failed

Possible inconsistency with RFC 6330: use of ISI vs ESI in `PayloadId`

First of all, thank you very much for maintaining this efficient RaptorQ library; it is highly appreciated. 💯

While integrating this into one of my projects, I believe I found an inconsistency of the library's behavior with RFC 6330.

RFC 6330, Section 5.3.1 states the following (emphasis added):

For a given source block of K source symbols, for encoding and decoding purposes, the source block is augmented with K'-K additional padding symbols [...]
The encoding symbol ID (ESI) is used by a sender and receiver to identify the encoding symbols of a source block, [...]. For a source block with K source symbols, the ESIs for the source symbols are 0, 1, 2, ..., K-1, and the ESIs for the repair symbols are K, K+1, K+2, .... Using the ESI for identifying encoding symbols in transport ensures that the ESI values continue consecutively between the source and repair symbols.

What I get when running the code, however, is a gap between the source and repair symbols, which should only be there for the internal symbol ID (ISI):

For purposes of encoding and decoding data, the value of K' derived from K is used as the number of source symbols of the extended source block upon which encoding and decoding operations are performed, where the K' source symbols consist of the original K source symbols and an additional K'-K padding symbols. The Internal Symbol ID (ISI) is used by the encoder and decoder to identify the symbols associated with the extended source block, i.e., for generating encoding symbols and for decoding. For a source block with K original source symbols, the ISIs for the original source symbols are 0, 1, 2, ..., K-1, the ISIs for the K'-K padding symbols are K, K+1, K+2, ..., K'-1, and the ISIs for the repair symbols are K', K'+1, K'+2, ....

AFAIU the RFC, the PayloadId should contain the ESI, but the library actually adds the ISI (in SourceBlockEncoder::repair_packets). Using the ESI instead of the ISI would also prevent potential panics in decode.

Example code:

use raptorq::{Encoder, ObjectTransmissionInformation};

fn main() {
    let encoder = Encoder::new(
        &[1, 2, 3],
        ObjectTransmissionInformation::new(3, 1, 1, 0, 1),
    );
    for packet in encoder.get_encoded_packets(3) {
        println!("{}", packet.payload_id().encoding_symbol_id());
    }
}

Example output (K'=10 for K=3):

0
1
2
10
11
12

Confusion of performance difference between encode_benchmarks.rs and main.rs

Hi Christopher,

I modified main.rs to measure the encoding time for 100 symbols of 1280 bytes each, as in your benchmarks. I got a time of 0.078 s for the encoding.

use rand::seq::SliceRandom;
use rand::Rng;
use raptorq::{Decoder, Encoder, EncodingPacket};
use std::time::Instant;

fn main() {
    // Generate some random data to send
    let mut data: Vec<u8> = vec![0; 100 * 1280]; // Change to 100 symbols, each is 1280 bytes
    for i in 0..data.len() {
        data[i] = rand::thread_rng().gen();
    }
    let now = Instant::now();
    let encoder = Encoder::with_defaults(&data, 1280); // To 1280 bytes

    // Perform the encoding, and serialize to Vec<u8> for transmission
    let mut packets: Vec<Vec<u8>> = encoder
        .get_encoded_packets(15)
        .iter()
        .map(|packet| packet.serialize())
        .collect();
    let elapsed = now.elapsed();
    let elapsed = elapsed.as_secs() as f64 + elapsed.subsec_millis() as f64 * 0.001;
    println!("Total time consumed in seconds: {}", elapsed);
}

However, from your benchmark result, the per-iteration time is 0.393 s / ((127 * 1024 * 1024 bytes) / (100 * 1280 bytes)) ≈ 0.000378 s.

Symbol size: 1280 bytes (without pre-built plan)
...
symbol count = 100, encoded 127 MB in 0.393secs, throughput: 2604.2Mbit/s
...

That is around a 200x speed difference. I believe I'm missing something about how main.rs and encode_benchmarks.rs use the API differently. Can you help me understand why?

Thanks,
PJ

[question]About the recovery strategies

Hi Cberner,

It seems that your implementation of the recovery strategy is for devices that have enough RAM (please correct me if I am wrong).
According to the last paragraph of RFC 6330, Section 4.4.3, there is an alternative approach for RAM-limited devices. What do you think of it?

Have a nice day. Thanks in advance.

100% cpu usage

Running the example code with data over 1 MB causes 100% CPU usage.

Python build doesn't work since v1.2.1

Hello,

I've tried to build your module for Python but it doesn't seem to work. Here's what I've tried.

lulu@Tsukuyomi:~/raptorq-1.3.0$ maturin build
💥 maturin failed
  Caused by: Couldn't find any bindings; Please specify them with --bindings/-b

lulu@Tsukuyomi:~/raptorq-1.3.0$ maturin build -b pyo3
💥 maturin failed
  Caused by: The bindings crate pyo3 was not found in the dependencies list

After these two failures, I tried to generate the .so file and use it directly in Python.

lulu@Tsukuyomi:~/raptorq-1.3.0$ maturin build
💥 maturin failed
  Caused by: Couldn't find any bindings; Please specify them with --bindings/-b
lulu@Tsukuyomi:~/raptorq-1.3.0$ cargo build --release
   Compiling proc-macro2 v1.0.9
   Compiling unicode-xid v0.2.0
   Compiling syn v1.0.16
   Compiling serde v1.0.104
   Compiling quote v1.0.3
   Compiling serde_derive v1.0.104
   Compiling raptorq v1.3.0 (/home/lulu/raptorq-1.3.0)
warning: unnecessary parentheses around block return value
  --> src/matrix.rs:87:9
   |
87 |         (mask - 1)
   |         ^^^^^^^^^^ help: remove these parentheses
   |
   = note: `#[warn(unused_parens)]` on by default

    Finished release [optimized + debuginfo] target(s) in 46.88s

lulu@Tsukuyomi:~/raptorq-1.3.0$ mv target/release/libraptorq.so examples/raptorq.so
lulu@Tsukuyomi:~/raptorq-1.3.0$ cd examples/
lulu@Tsukuyomi:~/raptorq-1.3.0/examples$ python3.7 main.py 
Traceback (most recent call last):
  File "main.py", line 3, in <module>
    from raptorq import Encoder, Decoder
ImportError: dynamic module does not define module export function (PyInit_raptorq)

Then I noticed that your module is also available on PyPI, so I tried to install it.

lulu@Tsukuyomi:~/raptorq-1.3.0/examples$ sudo pip3 install raptorq
Collecting raptorq
  Using cached https://files.pythonhosted.org/packages/d6/bb/1b988168e61812a4005857441ac07953c4a1124d790ea3e6ba9c05c13756/raptorq-1.3.0.tar.gz
  Installing build dependencies ... done
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/usr/lib/python3.7/tokenize.py", line 447, in open
        buffer = _builtin_open(filename, 'rb')
    FileNotFoundError: [Errno 2] No such file or directory: '/tmp/pip-install-910ays70/raptorq/setup.py'
    
    ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-install-910ays70/raptorq/

I've tried this with rustc stable 1.41.1 and rustc nightly 1.43.0 and I'm using python 3.7.5.

After that, I tried your 1.2.0 release (when there was still a python subfolder), and building with maturin or using the .so file works perfectly.

"cargo bench" fails.

How do I run the benchmarks?

cargo bench fails with:

~/raptorq (master) $ git log -n1
commit 02c80b595adb4478b8760430cf015ada48c1a1d6 (HEAD -> master, origin/master, origin/HEAD)
Author: Pavel <[email protected]>
Date:   Sun Oct 9 07:08:55 2022 +0300

    Added wasm build configuration (#136)
    
    Co-authored-by: Christopher Berner <[email protected]>

~/raptorq (master) $ cargo --version
cargo 1.65.0-nightly (646e9a0b9 2022-09-02)

~/raptorq (master) $ rustc --version
rustc 1.65.0-nightly (c2804e6ec 2022-09-07)

~/raptorq (master) $ cargo bench
   Compiling raptorq v1.7.0 (/home/fadedbee/raptorq)
error[E0432]: unresolved import `raptorq::generate_constraint_matrix`
 --> benches/matrix_sparsity.rs:1:5
  |
1 | use raptorq::generate_constraint_matrix;
  |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ no `generate_constraint_matrix` in the root

error[E0432]: unresolved import `raptorq::IntermediateSymbolDecoder`
 --> benches/matrix_sparsity.rs:2:5
  |
2 | use raptorq::IntermediateSymbolDecoder;
  |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ no `IntermediateSymbolDecoder` in the root

error[E0432]: unresolved import `raptorq::Octet`
 --> benches/matrix_sparsity.rs:3:5
  |
3 | use raptorq::Octet;
  |     ^^^^^^^^^-----
  |     |        |
  |     |        help: a similar name exists in the module (notice the capitalization): `octet`
  |     no `Octet` in the root

error[E0432]: unresolved import `raptorq::Symbol`
 --> benches/matrix_sparsity.rs:4:5
  |
4 | use raptorq::Symbol;
  |     ^^^^^^^^^------
  |     |        |
  |     |        help: a similar name exists in the module (notice the capitalization): `symbol`
  |     no `Symbol` in the root

error[E0432]: unresolved imports `raptorq::BinaryMatrix`, `raptorq::SparseBinaryMatrix`
 --> benches/matrix_sparsity.rs:5:46
  |
5 | use raptorq::{extended_source_block_symbols, BinaryMatrix, SparseBinaryMatrix};
  |                                              ^^^^^^^^^^^^  ^^^^^^^^^^^^^^^^^^ no `SparseBinaryMatrix` in the root
  |                                              |
  |                                              no `BinaryMatrix` in the root

error[E0432]: unresolved import `raptorq::Symbol`
  --> benches/codec_benchmark.rs:10:5
   |
10 | use raptorq::Symbol;
   |     ^^^^^^^^^------
   |     |        |
   |     |        help: a similar name exists in the module (notice the capitalization): `symbol`
   |     no `Symbol` in the root

error[E0432]: unresolved import `raptorq::Octet`
  --> benches/codec_benchmark.rs:11:46
   |
11 | use raptorq::{ObjectTransmissionInformation, Octet};
   |                                              ^^^^^
   |                                              |
   |                                              no `Octet` in the root
   |                                              help: a similar name exists in the module (notice the capitalization): `octet`

warning: use of deprecated struct `criterion::Benchmark`: Please use BenchmarkGroups instead.
 --> benches/codec_benchmark.rs:3:16
  |
3 | use criterion::Benchmark;
  |                ^^^^^^^^^
  |
  = note: `#[warn(deprecated)]` on by default

warning: use of deprecated struct `criterion::Benchmark`: Please use BenchmarkGroups instead.
  --> benches/codec_benchmark.rs:29:9
   |
29 |         Benchmark::new("", move |b| {
   |         ^^^^^^^^^

warning: use of deprecated struct `criterion::Benchmark`: Please use BenchmarkGroups instead.
  --> benches/codec_benchmark.rs:43:9
   |
43 |         Benchmark::new("", move |b| {
   |         ^^^^^^^^^

warning: use of deprecated struct `criterion::Benchmark`: Please use BenchmarkGroups instead.
  --> benches/codec_benchmark.rs:58:9
   |
58 |         Benchmark::new("", move |b| {
   |         ^^^^^^^^^

warning: use of deprecated struct `criterion::Benchmark`: Please use BenchmarkGroups instead.
  --> benches/codec_benchmark.rs:78:9
   |
78 |         Benchmark::new("", move |b| {
   |         ^^^^^^^^^

warning: use of deprecated struct `criterion::Benchmark`: Please use BenchmarkGroups instead.
  --> benches/codec_benchmark.rs:91:9
   |
91 |         Benchmark::new("", move |b| {
   |         ^^^^^^^^^

warning: use of deprecated struct `criterion::Benchmark`: Please use BenchmarkGroups instead.
   --> benches/codec_benchmark.rs:105:9
    |
105 |         Benchmark::new("", move |b| {
    |         ^^^^^^^^^

warning: use of deprecated associated function `criterion::Criterion::<M>::bench`: Please use BenchmarkGroups instead.
  --> benches/codec_benchmark.rs:27:7
   |
27 |     c.bench(
   |       ^^^^^

warning: use of deprecated associated function `criterion::Criterion::<M>::bench`: Please use BenchmarkGroups instead.
  --> benches/codec_benchmark.rs:41:7
   |
41 |     c.bench(
   |       ^^^^^

warning: use of deprecated associated function `criterion::Criterion::<M>::bench`: Please use BenchmarkGroups instead.
  --> benches/codec_benchmark.rs:56:7
   |
56 |     c.bench(
   |       ^^^^^

warning: use of deprecated associated function `criterion::Criterion::<M>::bench`: Please use BenchmarkGroups instead.
  --> benches/codec_benchmark.rs:76:7
   |
76 |     c.bench(
   |       ^^^^^

warning: use of deprecated associated function `criterion::Criterion::<M>::bench`: Please use BenchmarkGroups instead.
  --> benches/codec_benchmark.rs:89:7
   |
89 |     c.bench(
   |       ^^^^^

warning: use of deprecated associated function `criterion::Criterion::<M>::bench`: Please use BenchmarkGroups instead.
   --> benches/codec_benchmark.rs:103:7
    |
103 |     c.bench(
    |       ^^^^^

For more information about this error, try `rustc --explain E0432`.
warning: `raptorq` (bench "codec_benchmark") generated 13 warnings
error: could not compile `raptorq` due to 2 previous errors; 13 warnings emitted
warning: build failed, waiting for other jobs to finish...
error: could not compile `raptorq` due to 5 previous errors

ArrayMap .keys() method could return an Iterator.

Currently the .keys() method on ArrayMap returns a vector.

With minor modifications, it can return an Iterator, saving allocations:

pub fn keys<'s>(&'s self) -> impl Iterator<Item = usize> + 's {
    self.elements
        .iter()
        .enumerate()
        .filter_map(move |(i, elem)| match elem {
            Some(_) => Some(i + self.offset),
            None => None,
        })
}

The only downside of this approach is that the Iterator borrows self for 's, while the Vec approach doesn't borrow once the end of the method is reached. However all your tests still pass using this Iterator approach.

bytes crate

Are there any plans to use bytes::Buf trait instead of &[u8] to reduce data copy?

Decode encoded symbols with pre-built plan

Hello Christoper,

Sorry for asking dumb questions again and again. How can I determine the decoder config if the data is encoded with a pre-built-plan encoder like in encode_benchmark.rs? I didn't find a decoder constructor that takes a plan.

Thank you again,
PJ

Corruption if input is not a size multiple of the max_source_symbols

Sorry I can't share code (out of my control), but here's a description. Hopefully reproducible from my description. I'm working on getting the code approved for release.

Versions

  • raptor-code 1.0.5
  • rust: 1.66.0
  • Linux: Ubuntu 22.04.2 LTS

What I was doing

I took the example code from [here]:

let source_data: Vec<u8> = vec![1,2,3,4,5,6,7,8,9,10,11,12];
let max_source_symbols = 4;
let nb_repair = 3;

let mut encoder = raptor_code::SourceBlockEncoder::new(&source_data, max_source_symbols);
let n = encoder.nb_source_symbols() + nb_repair;

for esi in 0..n as u32 {
    let encoding_symbol = encoder.fountain(esi);
    //TODO transfer symbol over Network
    // network_push_pkt(encoding_symbol);
}

And I set the source_data to be 3684 bytes (a random example file), and max_source_symbols to 19 (chosen in order to get ~200-byte chunks, which is a requirement for me).

This produced a bunch of 194 byte chunks.

When decoding (I took example from the same place):

let encoding_symbol_length = 194;
let source_block_size = 19; // Number of source symbols in the source block
let mut n = 0u32;
let mut decoder = raptor_code::SourceBlockDecoder::new(source_block_size);

while decoder.fully_specified() == false {
    //TODO replace the following line with pkt received from network
    let (encoding_symbol, esi) = (vec![0; encoding_symbol_length],n);
    decoder.push_encoding_symbol(&encoding_symbol, esi);
    n += 1;
}

let source_block_size = encoding_symbol_length  * source_block_size;
let source_block = decoder.decode(source_block_size as usize);

I set encoding_symbol_length to 194 and source_block_size = 19 (per above), and ran it. It almost works perfectly. First of all, of course, the file is two bytes too big. I expected this, since 19 * 194 is 3686. I expected two null bytes at the end, though, which would be truncatable. But what actually happens is that the two extra null bytes appear one each at the end of the last two blocks. That is, one at position 3686, and one at position 3492 (194 bytes earlier).

This seems like a bug to me. Surely input whose length is not an exact multiple should still produce the correct output?

Workaround

I successfully worked around this by padding the input itself to 3686 bytes. I'm not sure if it needs to be a multiple of 194 or of 19. After doing that, a simple truncation to 3684 bytes produces perfect output.
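A hypothetical helper sketching the padding workaround described above, in plain Rust and independent of any crate; whether the pad must be a multiple of the symbol size (194) or the symbol count (19) is left open, as noted.

fn pad_to_multiple(mut data: Vec<u8>, chunk_size: usize) -> Vec<u8> {
    // Pad with zero bytes so the length is an exact multiple of chunk_size;
    // after decoding, truncate the output back to the original length.
    let remainder = data.len() % chunk_size;
    if remainder != 0 {
        data.resize(data.len() + (chunk_size - remainder), 0);
    }
    data
}

fn main() {
    let padded = pad_to_multiple(vec![0u8; 3684], 194);
    assert_eq!(padded.len(), 3686);
}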

Panic With Large Datasets

Thanks for making this awesome library! I am seeing a panic when I use it with large data sets on both Windows and Linux. If you modify line 7 of raptorq/examples/main.rs to be let mut data: Vec<u8> = vec![0; 100_000_000]; and then run it using cargo run --release --example main, it panics with the error below. Do you know of a workaround for this error?

cargo run --release --example main
    Finished release [optimized + debuginfo] target(s) in 0.08s
     Running `target\release\examples\main.exe`
thread 'main' panicked at 'assertion failed: `(left == right)`
  left: `0`,
 right: `1`', src\decoder.rs:230:13
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
error: process didn't exit successfully: `target\release\examples\main.exe` (exit code: 101)

Better pipelining?

Currently, Encoder has to create ALL the SourceBlockEncoders, and finish converting all the data in all the blocks into symbols before you can send out even the first packet. As the data being transmitted becomes larger and larger this becomes a bigger and bigger stall during which no bandwidth can be used.

On the other hand, all that is really needed is that the first block be fully transformed into symbols to start sending the first block of systematic data. Heck, if you really wanted to pipeline things, then the systematic data for a block without subblocks can probably be constructed while the requested repair packets are being computed in the background.

Some form of lazy initialization would be useful here.

This is actually a place where the current API is limiting: to do this myself using the existing SourceBlockEncoders, not quite enough is exposed to construct the source blocks, since the partition function and the logic used to figure out block sizes and so on are not exported.

One option would be to factor apart Encoder into two parts: one that figures out the plan of which blocks with what ranges of the source data are needed, which can be done with just a size, and no data required, and one that instantiates that plan either eagerly like now, or lazily as you first touch each block.

Rust API guidelines

Are you open to implementing the recommendations of the Rust API guidelines? I noticed that you're rolling your own serialization rather than using serde, and there are a number of standard traits that aren't implemented on your types (which makes it impossible for others to implement those traits due to Rust's orphan rules)

I'm willing to start doing PRs to implement some of these (e.g. serde, Clone, etc.), but wanted to know if you're open to them.
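As a sketch of what those PRs would amount to (hypothetical type, field list, and feature name, not the crate's actual definitions), deriving the standard traits plus optional serde support on a public type looks like this:

// Assumes a `serde_support` cargo feature and a serde dependency with the
// `derive` feature enabled; names here are illustrative only.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
#[cfg_attr(feature = "serde_support", derive(serde::Serialize, serde::Deserialize))]
pub struct ExamplePayloadId {
    source_block_number: u8,
    encoding_symbol_id: u32,
}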

Benchmarks not compiling with rustc 1.53.0 (53cb7b09b 2021-06-17)

When trying to build the benchmarks with cargo build --benches we get the following errors

error[E0432]: unresolved import `raptorq::generate_constraint_matrix`
 --> benches/matrix_sparsity.rs:1:5
  |
1 | use raptorq::generate_constraint_matrix;
  |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ no `generate_constraint_matrix` in the root

error[E0432]: unresolved import `raptorq::IntermediateSymbolDecoder`
 --> benches/matrix_sparsity.rs:2:5
  |
2 | use raptorq::IntermediateSymbolDecoder;
  |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ no `IntermediateSymbolDecoder` in the root

error[E0432]: unresolved import `raptorq::Octet`
 --> benches/matrix_sparsity.rs:3:5
  |
3 | use raptorq::Octet;
  |     ^^^^^^^^^-----
  |     |        |
  |     |        help: a similar name exists in the module (notice the capitalization): `octet`
  |     no `Octet` in the root

error[E0432]: unresolved import `raptorq::Symbol`
 --> benches/matrix_sparsity.rs:4:5
  |
4 | use raptorq::Symbol;
  |     ^^^^^^^^^------
  |     |        |
  |     |        help: a similar name exists in the module (notice the capitalization): `symbol`
  |     no `Symbol` in the root

error[E0432]: unresolved imports `raptorq::extended_source_block_symbols`, `raptorq::BinaryMatrix`, `raptorq::SparseBinaryMatrix`
 --> benches/matrix_sparsity.rs:5:15
  |
5 | use raptorq::{extended_source_block_symbols, BinaryMatrix, SparseBinaryMatrix};
  |               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^  ^^^^^^^^^^^^  ^^^^^^^^^^^^^^^^^^ no `SparseBinaryMatrix` in the root
  |               |                              |
  |               |                              no `BinaryMatrix` in the root
  |               no `extended_source_block_symbols` in the root

error[E0432]: unresolved import `raptorq::Symbol`
  --> benches/codec_benchmark.rs:10:5
   |
10 | use raptorq::Symbol;
   |     ^^^^^^^^^------
   |     |        |
   |     |        help: a similar name exists in the module (notice the capitalization): `symbol`
   |     no `Symbol` in the root

error[E0432]: unresolved import `raptorq::Octet`
  --> benches/codec_benchmark.rs:11:46
   |
11 | use raptorq::{ObjectTransmissionInformation, Octet};
   |                                              ^^^^^
   |                                              |
   |                                              no `Octet` in the root
   |                                              help: a similar name exists in the module (notice the capitalization): `octet`

warning: use of deprecated struct `criterion::Benchmark`: Please use BenchmarkGroups instead.
 --> benches/codec_benchmark.rs:3:5
  |
3 | use criterion::Benchmark;
  |     ^^^^^^^^^^^^^^^^^^^^
  |
  = note: `#[warn(deprecated)]` on by default

warning: use of deprecated struct `criterion::Benchmark`: Please use BenchmarkGroups instead.
  --> benches/codec_benchmark.rs:29:9
   |
29 |         Benchmark::new("", move |b| {
   |         ^^^^^^^^^

warning: use of deprecated struct `criterion::Benchmark`: Please use BenchmarkGroups instead.
  --> benches/codec_benchmark.rs:43:9
   |
43 |         Benchmark::new("", move |b| {
   |         ^^^^^^^^^

warning: use of deprecated struct `criterion::Benchmark`: Please use BenchmarkGroups instead.
  --> benches/codec_benchmark.rs:58:9
   |
58 |         Benchmark::new("", move |b| {
   |         ^^^^^^^^^

warning: use of deprecated struct `criterion::Benchmark`: Please use BenchmarkGroups instead.
  --> benches/codec_benchmark.rs:78:9
   |
78 |         Benchmark::new("", move |b| {
   |         ^^^^^^^^^

warning: use of deprecated struct `criterion::Benchmark`: Please use BenchmarkGroups instead.
  --> benches/codec_benchmark.rs:91:9
   |
91 |         Benchmark::new("", move |b| {
   |         ^^^^^^^^^

warning: use of deprecated struct `criterion::Benchmark`: Please use BenchmarkGroups instead.
   --> benches/codec_benchmark.rs:105:9
    |
105 |         Benchmark::new("", move |b| {
    |         ^^^^^^^^^

warning: use of deprecated associated function `criterion::Criterion::<M>::bench`: Please use BenchmarkGroups instead.
  --> benches/codec_benchmark.rs:27:7
   |
27 |     c.bench(
   |       ^^^^^

warning: use of deprecated associated function `criterion::Criterion::<M>::bench`: Please use BenchmarkGroups instead.
  --> benches/codec_benchmark.rs:41:7
   |
41 |     c.bench(
   |       ^^^^^

warning: use of deprecated associated function `criterion::Criterion::<M>::bench`: Please use BenchmarkGroups instead.
  --> benches/codec_benchmark.rs:56:7
   |
56 |     c.bench(
   |       ^^^^^

warning: use of deprecated associated function `criterion::Criterion::<M>::bench`: Please use BenchmarkGroups instead.
  --> benches/codec_benchmark.rs:76:7
   |
76 |     c.bench(
   |       ^^^^^

warning: use of deprecated associated function `criterion::Criterion::<M>::bench`: Please use BenchmarkGroups instead.
  --> benches/codec_benchmark.rs:89:7
   |
89 |     c.bench(
   |       ^^^^^

warning: use of deprecated associated function `criterion::Criterion::<M>::bench`: Please use BenchmarkGroups instead.
   --> benches/codec_benchmark.rs:103:7
    |
103 |     c.bench(
    |       ^^^^^

error: aborting due to 5 previous errors

For more information about this error, try `rustc --explain E0432`.
error: could not compile `raptorq`

To learn more, run the command again with --verbose.
warning: build failed, waiting for other jobs to finish...
error: aborting due to 2 previous errors; 13 warnings emitted

For more information about this error, try `rustc --explain E0432`.
error: build failed

[question] get_encoded_packets

Calling encoder.get_encoded_packets(overhead_packets_per_block) generates packets with the specified number of overhead packets per block. How can one find out how many blocks were generated?
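One possible way to answer this, as a sketch that assumes the packets expose payload_id().source_block_number() (as in the crate's public EncodingPacket/PayloadId types), is to count the distinct source block numbers among the returned packets:

use std::collections::HashSet;

use raptorq::EncodingPacket;

// Count how many distinct source blocks the encoded packets belong to.
fn count_blocks(packets: &[EncodingPacket]) -> usize {
    packets
        .iter()
        .map(|p| p.payload_id().source_block_number())
        .collect::<HashSet<_>>()
        .len()
}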

Division by zero

If I pass a number lower than 64 to Encoder::with_defaults(), then a division by 0 occurs:

diff --git a/examples/main.rs b/examples/main.rs
index 8d5a4c7..75d2d83 100644
--- a/examples/main.rs
+++ b/examples/main.rs
@@ -16,7 +16,7 @@ fn main() {
     }
 
     // Create the Encoder, with an MTU of 1400 (common for Ethernet)
-    let encoder = Encoder::with_defaults(&data, 1400);
+    let encoder = Encoder::with_defaults(&data, 32);
 
     // Perform the encoding, and serialize to Vec<u8> for transmission
     let mut packets: Vec<Vec<u8>> = encoder
$ cargo run --example main
   Compiling raptorq v1.7.0 (/home/rom/clone/raptorq)
    Finished dev [unoptimized + debuginfo] target(s) in 0.73s
     Running `target/debug/examples/main`
thread 'main' panicked at 'attempt to calculate the remainder with a divisor of zero', src/util.rs:41:8
stack backtrace:
   0: rust_begin_unwind
             at /rustc/9eb3afe9ebe9c7d2b84b71002d44f4a0edac95e0/library/std/src/panicking.rs:575:5
   1: core::panicking::panic_fmt
             at /rustc/9eb3afe9ebe9c7d2b84b71002d44f4a0edac95e0/library/core/src/panicking.rs:64:14
   2: core::panicking::panic
             at /rustc/9eb3afe9ebe9c7d2b84b71002d44f4a0edac95e0/library/core/src/panicking.rs:114:5
   3: raptorq::util::int_div_ceil
             at ./src/util.rs:41:8
   4: raptorq::base::ObjectTransmissionInformation::generate_encoding_parameters::{{closure}}
             at ./src/base.rs:216:25
   5: raptorq::base::ObjectTransmissionInformation::generate_encoding_parameters
             at ./src/base.rs:224:57
   6: raptorq::base::ObjectTransmissionInformation::with_defaults
             at ./src/base.rs:247:9
   7: raptorq::encoder::Encoder::with_defaults
             at ./src/encoder.rs:144:22
   8: main::main
             at ./examples/main.rs:19:19
   9: core::ops::function::FnOnce::call_once
             at /rustc/9eb3afe9ebe9c7d2b84b71002d44f4a0edac95e0/library/core/src/ops/function.rs:250:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.

A variable n_max is assigned from the integer division 32 / (8 * 8), which is 0:

let n_max = symbol_size as u32 / (sub_symbol_size * alignment) as u32;

This n_max is passed as a parameter to a closure here:

let num_source_blocks = int_div_ceil(kt as u64, kl(n_max) as u64);

The closure calls int_div_ceil(symbol_size as u64, alignment as u64 * n as u64), so the second parameter is 0:

let x = int_div_ceil(symbol_size as u64, alignment as u64 * n as u64);

I'll let you decide how to fix the problem (maybe force n_max to be at least 1?).
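A minimal sketch of that clamping idea, with assumed parameter types rather than the crate's actual code:

fn clamped_n_max(symbol_size: u16, sub_symbol_size: u16, alignment: u8) -> u32 {
    // Hypothetical helper: never let n_max fall to zero, so the later
    // ceiling division always has a non-zero divisor.
    (symbol_size as u32 / (sub_symbol_size as u32 * alignment as u32)).max(1)
}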

From a separate project with basically the same code as example.rs, but with Encoder::with_defaults(&data, an_integer_lower_than_64), I get an unreachable-code error instead:

$ cargo run
thread 'main' panicked at 'internal error: entered unreachable code', /home/rom/.cargo/registry/src/github.com-1ecc6299db9ec823/raptorq-1.7.0/src/base.rs:205:13
stack backtrace:
   0: rust_begin_unwind
             at /rustc/9eb3afe9ebe9c7d2b84b71002d44f4a0edac95e0/library/std/src/panicking.rs:575:5
   1: core::panicking::panic_fmt
             at /rustc/9eb3afe9ebe9c7d2b84b71002d44f4a0edac95e0/library/core/src/panicking.rs:64:14
   2: core::panicking::panic
             at /rustc/9eb3afe9ebe9c7d2b84b71002d44f4a0edac95e0/library/core/src/panicking.rs:114:5
   3: raptorq::base::ObjectTransmissionInformation::generate_encoding_parameters::{{closure}}
             at /home/rom/.cargo/registry/src/github.com-1ecc6299db9ec823/raptorq-1.7.0/src/base.rs:205:13
   4: raptorq::base::ObjectTransmissionInformation::generate_encoding_parameters
             at /home/rom/.cargo/registry/src/github.com-1ecc6299db9ec823/raptorq-1.7.0/src/base.rs:208:39
   5: raptorq::base::ObjectTransmissionInformation::with_defaults
             at /home/rom/.cargo/registry/src/github.com-1ecc6299db9ec823/raptorq-1.7.0/src/base.rs:231:9
   6: raptorq::encoder::Encoder::with_defaults
             at /home/rom/.cargo/registry/src/github.com-1ecc6299db9ec823/raptorq-1.7.0/src/encoder.rs:136:22
   7: raptorq_sample::main
             at ./src/main.rs:13:19
   8: core::ops::function::FnOnce::call_once
             at /rustc/9eb3afe9ebe9c7d2b84b71002d44f4a0edac95e0/library/core/src/ops/function.rs:250:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.

I don't know why I can't reproduce this error by running the example from the raptorq project directly.

benchmark(symbol_size, 0.50) panics when symbol count is set to 10

Hi Cberner,

benchmark(symbol_size, 0.50) panics; the detailed panic info is below:

thread 'main' panicked at 'index out of bounds: the len is 16 but the index is 16', src\arraymap.rs:192:9
note: run with RUST_BACKTRACE=1 environment variable to display a backtrace

Do you have any clue about this?

Support subblocks?

What would be needed to support subblocks?

I tried to use this for a data transmission application and ran afoul of the assert buried in the encoder, but since I have no idea why that assertion is there, I'm somewhat leery of playing around with it without context.

Question from arraymap.rs

Hi Cberner, this is the best RQ implementation I've ever seen.
I'm studying your code these days and was confused by the following code; I'd appreciate it if you could help.

1. In arraymap.rs (starting at line 215):

   pub fn with_capacity(start_key: usize, end_key: usize) -> U32VecMap {
       U32VecMap {
           offset: start_key,
           elements: vec![0; end_key],
       }
   }

   In my understanding of your intention, the elements should be initialized with vec![0; end_key - start_key].

2. In arraymap.rs (starting at line 234):

   pub fn swap(&mut self, key: usize, other_key: usize) {
       self.elements.swap(key, other_key);
   }

   If the offset is not initialized to 0, would this still be correct? (A sketch of the pattern I mean follows below.)

Thanks in advance
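For reference, the pattern I have in mind, as a minimal hypothetical OffsetVec (not the crate's code): keys are shifted down by the offset before touching the backing Vec, so the Vec only needs end_key - start_key slots.

struct OffsetVec {
    offset: usize,
    elements: Vec<u32>,
}

impl OffsetVec {
    fn with_capacity(start_key: usize, end_key: usize) -> OffsetVec {
        OffsetVec {
            // Only end_key - start_key slots are needed when indices are offset.
            offset: start_key,
            elements: vec![0; end_key - start_key],
        }
    }

    fn get(&self, key: usize) -> u32 {
        self.elements[key - self.offset]
    }

    fn swap(&mut self, key: usize, other_key: usize) {
        // With a non-zero offset, both keys have to be shifted here as well,
        // which is what the second question above is about.
        self.elements.swap(key - self.offset, other_key - self.offset);
    }
}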

Next release to crates.io?

Do you have a plan for when you'll publish the next release to crates.io? I'm currently pointing my Cargo.toml at this repository's master branch, but that's kind of hacky...

assert!((symbols_required as u32) < MAX_SOURCE_SYMBOLS_PER_BLOCK);

In ObjectTransmissionInformation (base.rs:120)

Since

// See section 4.4.1.2. "These parameters MUST be set so that ceil(ceil(F/T)/Z) <= K'_max."

should this assert instead be less than or equal?

assert!((symbols_required as u32) <= MAX_SOURCE_SYMBOLS_PER_BLOCK);

Thanks--very nice work btw.

C-Bindings

Any chance of having C bindings for this excellent library? Thanks.

Information on the encoded packets

Hello, as the title mentions, I was wondering which part of the encoded packets contains the source symbol indices.
More in detail: I noticed that when the Encoder and Decoder exchange packets, no separate information about the source symbols used to create each encoded packet (such as their indices) is added. So I guess this information is embedded in the encoded packet itself, is that right? And if so, in which part of the packet is it contained? Thanks.
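A small sketch of how that per-packet information can be read back, assuming the packet was produced by EncodingPacket::serialize(), where the first four bytes carry the payload ID (source block number plus encoding symbol ID) and the symbol data follows:

use raptorq::EncodingPacket;

// Read the payload ID (source block number + encoding symbol ID) back out
// of a serialized packet; the symbol data follows it.
fn inspect(serialized: &[u8]) {
    let packet = EncodingPacket::deserialize(serialized);
    let id = packet.payload_id();
    println!(
        "source block: {}, encoding symbol id: {}",
        id.source_block_number(),
        id.encoding_symbol_id()
    );
}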

Taking too long to encode

unencoded_packet_list is 1020076 bytes in size and PAYLOAD_SIZE = 1280 bytes.

let encoder = Encoder::with_defaults(unencoded_packet_list, PAYLOAD_SIZE as u16);

The code gets stuck when I run this.
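A minimal self-contained sketch of the reproduction under the reported parameters (roughly 1 MB of zeroed data and 1280-byte packets; the surrounding setup is assumed, not the original project):

use std::time::Instant;

use raptorq::Encoder;

fn main() {
    // Roughly the reported input size and packet size.
    let data = vec![0u8; 1_020_076];
    let start = Instant::now();
    let encoder = Encoder::with_defaults(&data, 1280);
    let packets = encoder.get_encoded_packets(0);
    println!("encoded {} packets in {:?}", packets.len(), start.elapsed());
}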
