
rtp's Introduction

Crate moved

As of the 23rd of August 2022 this crate has been migrated to the webrtc-rs/webrtc monorepo.

rtp's People

Contributors

algesten, edmellum, harlanc, jaymell, k0nserv, lookback-hugotunius, melekes, metaclips, rainliu, zotho


rtp's Issues

Compilation issue on `mipsel-unknown-linux-musl` target

Hello,
I was recently compiling the rtp crate for the mipsel-unknown-linux-musl target and got a compilation error about AtomicU64 not being defined.
As it turns out, this architecture does not support atomic operations on u64, so the Rust stdlib does not provide the type there.
Moreover, during inspection I noticed a synchronization problem with SequencerImpl::next_sequence_number: it is possible for
self.roll_over_count to accidentally be incremented by 2 when this method is called from multiple threads.
I will open a pull request which fixes that using a simple Mutex; it also fixes the compilation issue on the mipsel target.

#[derive(Debug, Clone)]
struct SequencerImpl {
    sequence_number: Arc<AtomicU16>,
    roll_over_count: Arc<AtomicU64>,
}

impl Sequencer for SequencerImpl {
    /// NextSequenceNumber increments and returns a new sequence number for
    /// building RTP packets
    fn next_sequence_number(&self) -> u16 {
        let sequence_number = self.sequence_number.load(Ordering::SeqCst);
        if sequence_number == u16::MAX {
            self.roll_over_count.fetch_add(1, Ordering::SeqCst);
            self.sequence_number.store(0, Ordering::SeqCst);
            0
        } else {
            self.sequence_number
                .store(sequence_number + 1, Ordering::SeqCst);
            sequence_number + 1
        }
    }

    /// RollOverCount returns the number of times the 16-bit sequence number
    /// has wrapped
    fn roll_over_count(&self) -> u64 {
        self.roll_over_count.load(Ordering::SeqCst)
    }

    fn clone_to(&self) -> Box<dyn Sequencer + Send + Sync> {
        Box::new(self.clone())
    }
}
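Below is a minimal sketch of the Mutex-based fix described above (written as inherent methods rather than against the Sequencer trait, since only part of the trait is quoted here; the actual pull request may differ):

// Hypothetical sketch: keep both counters behind a single lock so that the
// "wrapped past u16::MAX" check and the roll-over increment happen atomically.
// This also removes the AtomicU64 dependency, which mipsel-unknown-linux-musl lacks.
use std::sync::{Arc, Mutex};

#[derive(Debug, Clone, Default)]
struct SequencerImpl(Arc<Mutex<SequencerState>>);

#[derive(Debug, Default)]
struct SequencerState {
    sequence_number: u16,
    roll_over_count: u64,
}

impl SequencerImpl {
    fn next_sequence_number(&self) -> u16 {
        let mut state = self.0.lock().unwrap();
        if state.sequence_number == u16::MAX {
            state.roll_over_count += 1;
            state.sequence_number = 0;
        } else {
            state.sequence_number += 1;
        }
        state.sequence_number
    }

    fn roll_over_count(&self) -> u64 {
        self.0.lock().unwrap().roll_over_count
    }
}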

Benchmark RTP

Benchmark the rtp crate and compare its performance to Pion's RTP implementation or to other implementations.
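As a hedged starting point, a Criterion micro-benchmark over Packet unmarshal/marshal could look like the sketch below; the rtp import paths and the location of the Marshaller trait are assumptions based on the 0.6.x snippets quoted elsewhere in these issues:

// benches/packet.rs — hypothetical benchmark, not part of the crate.
use bytes::{Bytes, BytesMut};
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use rtp::packet::Packet;
use rtp::packetizer::Marshaller; // assumed path of the Marshaller trait

fn bench_packet(c: &mut Criterion) {
    // Minimal 12-byte RTP header (V=2, PT=96, seq=1) followed by a dummy payload.
    let raw = Bytes::from_static(&[
        0x80, 0x60, 0x00, 0x01, 0x00, 0x00, 0x00, 0x64, 0x00, 0x00, 0x30, 0x39, 0xde, 0xad, 0xbe, 0xef,
    ]);

    c.bench_function("packet_unmarshal", |b| {
        b.iter(|| Packet::unmarshal(black_box(&raw)).unwrap())
    });

    let packet = Packet::unmarshal(&raw).unwrap();
    c.bench_function("packet_marshal_to", |b| {
        b.iter(|| {
            let mut buf = BytesMut::with_capacity(raw.len());
            packet.marshal_to(black_box(&mut buf)).unwrap()
        })
    });
}

criterion_group!(benches, bench_packet);
criterion_main!(benches);

Comparing against Pion would then be a matter of running an equivalent Go benchmark over the same input and normalizing by iterations.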

Overflow on long-running TrackLocal

I'm finding that long-running video streams eventually panic with the following error due to the packetizer's timestamp attribute overflowing:

thread 'tokio-runtime-worker' panicked at 'attempt to add with overflow', registry/src/github.com-1ecc6299db9ec823/rtp-0.6.1/src/packetizer/mod.rs:138:9

which is this line of code in the packetizer:

self.timestamp += samples;

I wouldn't mind researching a better solution here if needed, but I'm wondering whether simply using a Wrapping type to allow addition with overflow would permit the rollover without breaking clients.
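If wrapping is acceptable, the arithmetic itself is a one-line change; a sketch is below (RTP timestamps are defined modulo 2^32, so receivers already have to cope with wrap-around):

// Sketch only: advance the 32-bit RTP timestamp with explicit wrap-around
// instead of panicking on overflow in debug builds.
fn advance_timestamp(timestamp: u32, samples: u32) -> u32 {
    timestamp.wrapping_add(samples)
}

// Inside the packetizer the offending line would become:
// self.timestamp = self.timestamp.wrapping_add(samples);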

Are `Packet.header.timestamp` values raw or transformed?

I have a live media streaming server that currently works with RTMP, and I'm adding support for WebRTC to be interleaved within RTMP workflows (e.g. WebRTC input, generate HLS, then push out to an RTMP server). Thus I need to know the true PTS of each H264 packet I receive in order to pass the correct timestamps down the workflow.

So my code currently calls TrackRemote.read_rtp() to get the RTP packet, then passes it into a custom H264 handler based on the h264_writer code. The only timestamp I can see is rtp_packet.header.timestamp, but the RTP spec (and the docs I've read) says that the timestamps in RTP packets start from a random offset and thus need to be adjusted based on an epoch communicated in the RTCP session.

There's not much documentation, so I'm having trouble determining the answer: are the rtp_packet.header.timestamp values raw values from the RTP packet, or are they already adjusted based on the RTCP session offset?

If they are raw, what's the solution to getting a reliable PTS for each packet?
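For context, the usual way (per RFC 3550) to recover an absolute presentation time is to correlate the raw RTP timestamp with the NTP/RTP timestamp pair carried in an RTCP Sender Report. A rough sketch, assuming you have the most recent Sender Report for the stream at hand:

// Hypothetical helper: map a raw RTP timestamp onto a wall-clock PTS using the
// latest RTCP Sender Report, which pairs an NTP time with an RTP timestamp.
// clock_rate is the payload's clock rate, e.g. 90_000 for H264 video.
// This sketch ignores packets whose timestamps predate the Sender Report.
fn rtp_to_pts_seconds(
    rtp_timestamp: u32,
    sr_rtp_timestamp: u32,
    sr_ntp_seconds: f64,
    clock_rate: u32,
) -> f64 {
    // wrapping_sub handles the 32-bit roll-over of RTP timestamps.
    let delta = rtp_timestamp.wrapping_sub(sr_rtp_timestamp);
    sr_ntp_seconds + delta as f64 / clock_rate as f64
}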

H264 packet in AVC mode needs to create AVCDecoderConfigurationRecord

When H264Packet is created with is_avc: true, all depacketized payloads are in AVC format, with the length prefixed to each NALU. This is correct except for the SPS and PPS NALUs, as AVC encoding expects these to be carried in an AVCDecoderConfigurationRecord.

Is this something you'd be open to a PR for, to have that built into H264Packet?

The code I currently use for this is:

pub fn convert_to_avc_decoder_config_record(
    sps_records: &Vec<Bytes>,
    pps_records: &Vec<Bytes>,
) -> Option<Bytes> {
    if sps_records.is_empty() || pps_records.is_empty() {
        return None;
    }

    let mut bytes = BytesMut::new();
    bytes.put_u8(1); // version
    bytes.put_u8(sps_records[0][1]); // profile
    bytes.put_u8(sps_records[0][2]); // compatibility
    bytes.put_u8(sps_records[0][3]); // level
    bytes.put_u8(0xFC | 3); // reserved (6 bits, all ones), lengthSizeMinusOne = 3, i.e. 4-byte NALU length prefixes
    bytes.put_u8(0xE0 | (sps_records.len() as u8)); // reserved (3 bits), num of SPS (5 bits)
    for sps in sps_records {
        bytes.put_u16(sps.len() as u16);
        bytes.extend_from_slice(&sps);
    }

    bytes.put_u8(pps_records.len() as u8);
    for pps in pps_records {
        bytes.put_u16(pps.len() as u16);
        bytes.extend_from_slice(&pps);
    }

    Some(bytes.freeze())
}

At first glance it seems like a path to implementing this is for the H264Packet type to store an Option<Bytes> field for both a cached SPS and a cached PPS. Once both an SPS and a PPS have been cached, that depacketize() call would return the AVCDecoderConfigurationRecord bytes. Calls to depacketize() before receiving the SPS and PPS records would always return an empty payload.

I'm not totally clear how this gets handled when new SPS and PPS records come down from RTP, especially since the AVC-based systems I'm working with (RTMP) seem to ignore later sequence headers, so I'm not sure whether H264Packet needs to contain a collection of seen SPS and PPS headers, or only the first ones.
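A rough sketch of the caching idea, written as a standalone collector around the convert_to_avc_decoder_config_record function above rather than the crate's actual H264Packet internals (the "first SPS/PPS wins" policy is one of the open questions):

use bytes::Bytes;

// Hypothetical state machine: cache the first SPS and PPS seen and emit the
// AVCDecoderConfigurationRecord once, when both are available.
#[derive(Default)]
struct AvcConfigCollector {
    sps: Option<Bytes>,
    pps: Option<Bytes>,
    emitted: bool,
}

impl AvcConfigCollector {
    /// Feed one NALU; returns the config record the first time both parameter
    /// sets have been seen (NALU type 7 = SPS, 8 = PPS).
    fn push_nalu(&mut self, nalu: &Bytes) -> Option<Bytes> {
        match nalu.first().map(|b| b & 0x1F) {
            Some(7) if self.sps.is_none() => self.sps = Some(nalu.clone()),
            Some(8) if self.pps.is_none() => self.pps = Some(nalu.clone()),
            _ => {}
        }
        if !self.emitted {
            if let (Some(sps), Some(pps)) = (&self.sps, &self.pps) {
                self.emitted = true;
                return convert_to_avc_decoder_config_record(&vec![sps.clone()], &vec![pps.clone()]);
            }
        }
        None
    }
}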

padding included in payload

impl Marshaller for Packet doesn't consider padding:

  • unmarshal shouldn't include the padding bytes in the new Packet's payload
  • marshal_size and marshal_to then also need to change to add in padding bytes

rtp/src/packet/mod.rs

Lines 33 to 52 in 54d1579

impl Marshaller for Packet {
    /// Unmarshal parses the passed byte slice and stores the result in the Header this method is called upon
    fn unmarshal(raw_packet: &Bytes) -> Result<Self, Error> {
        let header = Header::unmarshal(raw_packet)?;
        let payload = raw_packet.slice(header.marshal_size()..);
        Ok(Packet { header, payload })
    }

    /// MarshalSize returns the size of the packet once marshaled.
    fn marshal_size(&self) -> usize {
        self.header.marshal_size() + self.payload.len()
    }

    /// MarshalTo serializes the packet and writes to the buffer.
    fn marshal_to(&self, buf: &mut BytesMut) -> Result<usize, Error> {
        let n = self.header.marshal_to(buf)?;
        buf.put(&*self.payload);
        Ok(n + self.payload.len())
    }
}

Consider not using Anyhow for errors

Hi, first of all, thanks for creating these libraries; having a good story for WebRTC in Rust will be awesome.

We are in the process of using this library (rtp) in an application. However, since you are using Anyhow, where the error type does not implement std::error::Error, it's difficult to use in an application that does not itself use Anyhow.

Anyhow is awesome, but as far as I can understand it's first and foremost meant for applications. Libraries, which are meant to be used from other libraries or applications, should use something like thiserror (https://crates.io/crates/thiserror).

Anyhow mentions this in its README: https://github.com/dtolnay/anyhow#comparison-to-thiserror
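For illustration, a minimal thiserror-style error type a library like this might expose instead of anyhow::Error (the variant names here are made up, not a proposal for the actual API):

use thiserror::Error;

// Hypothetical library-side error type: each variant gets a Display message and
// the derive provides std::error::Error, so downstream crates can wrap it in
// their own error types (including anyhow, if the application prefers that).
#[derive(Debug, Error, PartialEq)]
pub enum RtpError {
    #[error("RTP header size insufficient: got {0} bytes")]
    HeaderSizeInsufficient(usize),
    #[error("buffer too small: {remaining} bytes remaining, {required} required")]
    ShortBuffer { remaining: usize, required: usize },
}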
