Comments (7)

paullouisageneau avatar paullouisageneau commented on July 26, 2024

This kind of issue typically means that multiple threads concurrently modify the std::shared_ptr object: even if reference counting is thread-safe, concurrent access to the same std::shared_ptr object can cause a data race.

Are you sure track can't be read and modified by different threads at the same time? I'm asking because there is no lock in prepare_track(), which looks like a synchronization issue since you have to assign track somewhere.
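
For illustration, a minimal sketch of the hazard being described and one way to guard it; Track, Peer, and the body of prepare_track are placeholders, not the project's actual code:

    #include <memory>
    #include <mutex>

    struct Track {};

    struct Peer {
        std::shared_ptr<Track> track;   // read and written from different threads
        std::mutex track_mutex;

        // Unsafe: assigning `track` here while another thread copies it is a
        // data race, even though the control block's reference count is atomic.
        void prepare_track_unsafe() { track = std::make_shared<Track>(); }

        // Safe: every access to the shared_ptr object itself is serialized.
        void prepare_track_safe() {
            std::lock_guard<std::mutex> lock(track_mutex);
            track = std::make_shared<Track>();
        }

        std::shared_ptr<Track> get_track() {
            std::lock_guard<std::mutex> lock(track_mutex);
            return track;               // copy made under the lock
        }
    };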

vipcxj avatar vipcxj commented on July 26, 2024

After sending the shared_ptr to the async callback instead of this, the problem disappears. But this is not the rtc::Track; it is the object which owns the rtc::Track. I had already passed the shared_ptr<rtc::Track> to the async callback before. I still don't understand why this problem is happening, but it seems to be solved.

vipcxj avatar vipcxj commented on July 26, 2024

By the way, I can confirm that there is at least no race when calling prepare_track.

vipcxj avatar vipcxj commented on July 26, 2024

The issue came back. I found that C++20 coroutine lambdas with non-empty capture lists may cause use-after-free errors, so I wrote a helper function to avoid it:

    // Copies the callable and its arguments by value into this coroutine's
    // frame, so they outlive the caller's closure object.
    template <typename F, typename ...Args>
    auto invoke_async_lambda(F f, Args ...args)
        -> decltype(f(args...))
    { co_return co_await f(args...); }

    // Wraps a capturing coroutine lambda in a normal (non-coroutine) lambda
    // that forwards to invoke_async_lambda above.
    template <typename F>
    auto fix_async_lambda(F f) {
        return [f](auto &&...args) {
            return invoke_async_lambda(f, std::forward<decltype(args)>(args)...);
        };
    }

But it did not work.
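
For context, here is a sketch of how a helper like this would typically be applied, assuming standalone asio as in the rest of this thread; Foo and use_foo are illustrative names, not part of the original code:

    #include <asio.hpp>
    #include <memory>

    // ... fix_async_lambda / invoke_async_lambda as defined above ...

    struct Foo { int value = 42; };

    asio::awaitable<void> use_foo(std::shared_ptr<Foo> foo) {
        co_return;  // stand-in for real asynchronous work
    }

    int main() {
        asio::io_context ctx;
        auto foo = std::make_shared<Foo>();

        // The capturing coroutine lambda is wrapped so that a copy of the
        // closure (and therefore of the captured shared_ptr) lives in
        // invoke_async_lambda's coroutine frame, which stays alive across the
        // inner co_await.
        auto cb = fix_async_lambda([foo]() -> asio::awaitable<void> {
            co_await use_foo(foo);
        });

        asio::co_spawn(ctx, cb(), asio::detached);
        ctx.run();
    }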

vipcxj avatar vipcxj commented on July 26, 2024

I think I found the problem. It seems to be a bug in g++.

    auto some_routine_accept_routine_callback(std::function<asio::awaitable<void>()>) -> asio::awaitable<void>;

    std::shared_ptr<Foo> foo = ...;
    // assume foo.use_count() == n
    co_await some_routine_accept_routine_callback([foo]() -> asio::awaitable<void> {
      return [](std::shared_ptr<Foo> foo) -> asio::awaitable<void> {
         ...
      }(foo);
    });
    // here foo.use_count() == n - 1

    // workaround
    auto t = [foo]() -> asio::awaitable<void> {
      return [](std::shared_ptr<Foo> foo) -> asio::awaitable<void> {
         ...
      }(foo);
    };
    co_await some_routine_accept_routine_callback(t);

I think something goes wrong with the argument stack of the coroutine call.

I really admire myself for finding a bug this odd.

paullouisageneau avatar paullouisageneau commented on July 26, 2024

I don't think this is a compiler bug. I lack familiarity with C++20, but I think that's how scopes are supposed to work with coroutines. The C++ Core Guidelines explicitly say not to use capturing lambdas that are coroutines, and they mention shared pointers:

Usage patterns that are correct with normal lambdas are hazardous with coroutine lambdas. The obvious pattern of capturing variables will result in accessing freed memory after the first suspension point, even for refcounted smart pointers and copyable types.

A lambda results in a closure object with storage, often on the stack, that will go out of scope at some point. When the closure object goes out of scope the captures will also go out of scope. Normal lambdas will have finished executing by this time so it is not a problem. Coroutine lambdas may resume from suspension after the closure object has destructed and at that point all captures will be use-after-free memory access.
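
The pattern that guideline (CP.51) points to is to pass state into the coroutine as by-value parameters rather than captures. A minimal sketch under the same standalone-asio assumption as above, with illustrative names:

    #include <asio.hpp>
    #include <memory>

    struct Foo { int value = 0; };

    // Instead of `[foo]() -> asio::awaitable<void> { ... }`, whose closure can
    // be destroyed before the coroutine resumes, take the shared_ptr by value:
    // the parameter is copied into the coroutine frame and lives exactly as
    // long as the coroutine itself.
    asio::awaitable<void> use_foo(std::shared_ptr<Foo> foo) {
        // ... asynchronous work using *foo ...
        co_return;
    }

    int main() {
        asio::io_context ctx;
        asio::co_spawn(ctx, use_foo(std::make_shared<Foo>()), asio::detached);
        ctx.run();
    }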

vipcxj avatar vipcxj commented on July 26, 2024

I still think it is a serious bug. On the one hand, the problem here is not use-after-free but something even more outrageous: the reference count is decremented one extra time, which causes all captured objects to be destructed once more. In fact my original code works fine on Windows, and it guarantees that the lifetime of the captured objects exceeds the lifetime of the lambda, so even if I did nothing about it there should not be a bug. On the other hand, I have already handled it in my example by passing in not a coroutine lambda but a normal lambda with a capture list, which wraps a coroutine lambda without a capture list. Finally, just adding a temporary variable leads to a completely different result, which is really bad semantically.
