
libgo's Introduction

libgo


libgo -- a coroutine library and a parallel programming library

libgo is a stackful coroutine library for cooperative scheduling, written in C++11, and is also a powerful and easy-to-use parallel programming library.

Three platforms are currently supported:

Linux

MacOSX

Windows (Win7 or above, x86 or x64, compiled with VS2015/2017)

With libgo, multi-threaded programs can be developed as quickly and intuitively as in concurrency-oriented languages such as Golang and Erlang, while retaining the native performance advantage of C++ -- the best of both worlds.

Libgo has the following characteristics:

  • 1.Provides coroutines as general and powerful as Golang's goroutines: code written against coroutines reads like simple synchronous code while delivering asynchronous performance.

  • 2.Supports massive numbers of coroutines: creating 1 million coroutines takes only 4.5 GB of physical memory. (Measured in a real test, with no deliberate stack compression.)

  • 3.Supports multi-threaded scheduling of coroutines, with an efficient load-balancing strategy and synchronization mechanisms, making it easy to write efficient multi-threaded programs.

  • 4.The number of scheduler threads scales dynamically, and a slow coroutine cannot cause head-of-line blocking.

  • 5.Hook technology turns the blocking calls of synchronously written, linked third-party libraries into asynchronous calls, greatly improving their performance. There is no need to worry that some DB vendors provide no asynchronous driver: client drivers such as hiredis and mysqlclient can be used directly and achieve performance comparable to asynchronous drivers.

  • 6.Both dynamic linking and fully static linking are supported, making it easy to build a statically linked C++11 executable and deploy it to Linux systems with older runtimes.

  • 7.Provides Channel, Co_mutex, Co_rwmutex, timers, and other facilities to help users write programs more easily.

  • 8.Supports coroutine-local storage (CLS), which fully covers every TLS scenario (see the tutorial code sample13_cls.cpp for details).

  • User feedback from the past two years shows that many developers have a project built on an asynchronous non-blocking model (typically on the epoll, libuv, or ASIO network libraries) and then need to access DBs such as MySQL that provide no asynchronous driver. Conventional connection-pool plus thread-pool schemes are expensive in high-concurrency scenarios: for best performance each connection must be paired with a thread, each thread context switch costs thousands of instruction cycles, and too many active threads sharply degrade the OS's scheduling capacity -- unacceptable to many developers.

  • In this situation there is no need to restructure existing code to let libgo solve the problem of blocking operations inside a non-blocking model. libgo 3.0 introduces special tools for exactly this scenario, solving the problem non-intrusively: multiple schedulers with isolated running environments and easy interaction (see the tutorial code sample1_go.cpp for details), and a coroutine pool plus connection pool that can replace the traditional thread-pool scheme (see the tutorial code sample10_co_pool.cpp and sample11_connection_pool.cpp for details).

  • The tutorial directory contains many tutorial programs, including detailed instructions, so that developers can learn the libgo library step by step.

  • If you find a bug, have a suggestion, or run into any ambiguity, you can submit an issue or contact the author directly by email: [email protected]

Compile and use libgo:

  • Vcpkg:

If you have vcpkg installed, you can install libgo directly: $ vcpkg install libgo

  • Linux:

    1.Use cmake to compile and install:

      $ mkdir build
      $ cd build
      $ cmake ..
      $ make debug      # Skip this if you don't need a debug build.
      $ sudo make uninstall
      $ sudo make install

    2.Dynamic link with glibc: (put libgo at the front of the link list)

      g++ -std=c++11 test.cpp -llibgo -ldl [-lother_libs]
    

    3.Fully static link: (put libgo at the front of the link list)

      g++ -std=c++11 test.cpp -llibgo -Wl,--whole-archive -lstatic_hook -lc -lpthread -Wl,--no-whole-archive [-lother_libs] -static
    
  • Windows: (3.0 supports Windows; just use the master branch directly!)

    0.When fetching the code from GitHub on Windows, beware of line-ending problems. Install git properly (with its default options) and use git clone to download the source. (Do not download zip archives.)

    1.Use CMake to build project.

      #For example vs2015(x64):
      $ cmake .. -G"Visual Studio 14 2015 Win64"
    
      #For example vs2015(x86):
      $ cmake .. -G"Visual Studio 14 2015"
    

    2.If you want to run the test code, link against the Boost library and set BOOST_ROOT in the cmake parameters:

      For example:
      $ cmake .. -G"Visual Studio 14 2015 Win64" -DBOOST_ROOT="e:\\boost_1_69_0"
    

Performance

Like Golang, libgo implements a complete scheduler (users only create coroutines, with no concern for their execution, suspension, or resource reclamation). libgo is therefore qualified to compare single-threaded performance with Golang (it would not be qualified to compare against libraries of different capability).

Test environment: 2018 13-inch MacBook (base CPU)
Operating system: Mac OSX
CPU: 2.3 GHz Intel Core i5 (4 cores, 8 threads)
Test script: $ test/golang/test.sh thread_number

Matters needing attention (WARNING):

Avoid TLS, and non-reentrant library functions whose implementation depends on TLS, as much as possible. If they are unavoidable, take care not to touch TLS data obtained before a coroutine switch once the switch has happened.

Several kinds of behavior can cause a coroutine switch:

  • The user calls co_yield to voluntarily give up the CPU.
  • Contention on coroutine locks, and Channel reads and writes.
  • Sleep-family system calls.
  • System calls that wait for events, such as poll, select, epoll_wait.
  • DNS-related system calls (the gethostbyname family).
  • connect, accept, and read/write operations on blocking sockets.
  • Read/write operations on pipes.

System calls hooked on Linux:

	connect   
	read      
	readv     
	recv      
	recvfrom  
	recvmsg   
	write     
	writev    
	send      
	sendto    
	sendmsg   
	poll      
	__poll
	select    
	accept    
	sleep     
	usleep    
	nanosleep
	gethostbyname                                                               
	gethostbyname2                                                              
	gethostbyname_r                                                             
	gethostbyname2_r                                                            
	gethostbyaddr                                                               
	gethostbyaddr_r

All of the system calls above can block. Once hooked, they no longer block the whole thread -- only the current coroutine; during the wait, the CPU switches to other coroutines. When invoked from a native thread (outside any coroutine), the hooked system calls behave 100% identically to the originals, with no change at all.

	socket
	socketpair
	pipe
	pipe2
	close     
	__close
	fcntl     
	ioctl     
	getsockopt
	setsockopt
	dup       
	dup2      
	dup3      

The system calls above do not block. Although they are also hooked, their behavior is not changed; the hooks merely track socket options and state.

System calls hooked on Windows:

	ioctlsocket                                                                        
	WSAIoctl                                                                           
	select                                                                             
	connect                                                                            
	WSAConnect                                                                         
	accept                                                                             
	WSAAccept                                                                          
	WSARecv                                                                            
	recv                                                                               
	recvfrom                                                                           
	WSARecvFrom                                                                        
	WSARecvMsg                                                                         
	WSASend                                                                            
	send                                                                               
	sendto                                                                             
	WSASendTo                                                                          
	WSASendMsg

libgo's People

Contributors

administrator1, dota17, firechickrsx, higithubhi, hungmingwu, jxbdlut, krysme, kx12345, mlkt, sarrow104, soolo-ss, sunny-shu, xbased, xgdgsc, yyzybb537


libgo's Issues

deque assertion failure on Windows

Build environment: VS2017
Built the demo tutorial/sample8_multithread.cpp, changing line 41 from
for (int i = 0; i < 8; ++i)
to
for (int i = 0; i < 108; ++i)
which triggers a deque assertion:
_DEBUG_ERROR("deque iterator not dereferencable");

I tried working around it with a thread-safe queue, but throughput dropped considerably.

clang support

Compiling the tutorials with clang on Linux fails:

/usr/include/libgo/channel.h:41: error: unknown type name 'nullptr_t'; did you mean 'std::nullptr_t'?
    Channel const& operator>>(nullptr_t ignore) const
                              ^

Are there plans to support clang?

Cannot compile with VS2015

Using CMake 3.5.2 with default settings, I generated a VS2015 solution; the build fails, which is odd:
1>------ Build started: Project: libgo_static, Configuration: Debug x64 ------
1> block_object.cpp
1>e:\projects\libgo-master\libgo\util.h(24): error C2447: '{': missing function header (old-style formal list?)
1>e:\projects\libgo-master\libgo\util.h(47): error C2065: 'RefObject': undeclared identifier
1>e:\projects\libgo-master\libgo\util.h(47): error C2923: 'std::is_base_of': 'RefObject' is not a valid template type argument for parameter '_Base'

The errors appear right where RefObject is defined...

What is epoll_create_lock_ in io_wait for?

Both epoll_owner_pid and epoll_fd are thread_local, which means scheduler threads cannot conflict with each other when creating their epoll fds. What, then, is epoll_create_lock protecting?

libgo performance issue

Hello. While benchmarking coroutine switching with libgo, I profiled the library and observed the following:

Function name / Calls / Elapsed inclusive % / Elapsed exclusive % / Avg elapsed inclusive / Avg elapsed exclusive / Module
<lambda_90f6debc7fcf34860066f531e8cc5f6b>::operator() 1 99.97 8.92 12,601.52 1,124.35 switch.t.exe

Cause: entering the user function involves only a single call to this->fn_() inside Task::task_cb, yet its exclusive share is as high as 8.92%. If coroutines are created frequently and each finishes quickly, won't this become a bottleneck?

Testing that hypothesis, performance drops enormously -- to about 1/20 of the original.
My environment is VS2015 on Win7.

// Keep creating coroutines; only one coroutine exists at any time
void task_test()
{
    ++counter[0];
    go task_test;
}
int main()
{
    go task_test;
    co_sched.RunLoop();
}

A question about the libgo_epoll_wait function

    pollfd pfd;
    pfd.fd = epfd;
    pfd.events = POLLIN | POLLOUT;
    pfd.revents = 0;
    res = poll(&pfd, 1, timeout);
    if (res <= 0) return res;
    return epoll_wait(epfd, events, maxevents, 0);

What is the purpose of calling poll before the final epoll_wait?

sample5_asio.cpp fails on Windows.

The error message is:
Run-Time Check Failure #0 - The value of ESP was not properly saved across a function call. This is usually a result of calling a function declared with one calling convention with a function pointer declared with a different calling convention.

stack:

Demo.exe!co::hook_ioctlsocket(unsigned int s, long cmd, unsigned long * argp) Line 21 C++
Demo.exe!co::SetNonblocking(unsigned int s, bool is_nonblocking) Line 69 C++
Demo.exe!co::read_mode_hook<unsigned int,unsigned int (__cdecl*)(unsigned int,sockaddr *,int *),sockaddr * &,int * &>(unsigned int (unsigned int, sockaddr *, int *) * fn, const char * fn_name, int flags, unsigned int s, sockaddr * & <args_0>, int * & <args_1>) Line 286 C++
Demo.exe!co::hook_accept(unsigned int s, sockaddr * addr, int * addrlen) Line 336 C++
Demo.exe!boost::asio::detail::socket_ops::call_accept(int * __formal, unsigned int s, sockaddr * addr, unsigned int * addrlen) Line 96 C++
Demo.exe!boost::asio::detail::socket_ops::accept(unsigned int s, sockaddr * addr, unsigned int * addrlen, boost::system::error_code & ec) Line 114 C++
Demo.exe!boost::asio::detail::socket_ops::sync_accept(unsigned int s, unsigned char state, sockaddr * addr, unsigned int * addrlen, boost::system::error_code & ec) Line 140 C++
Demo.exe!boost::asio::detail::win_iocp_socket_serviceboost::asio::ip::tcp::acceptboost::asio::basic_socket<boost::asio::ip::tcp,boost::asio::stream_socket_service<boost::asio::ip::tcp > >(boost::asio::detail::win_iocp_socket_serviceboost::asio::ip::tcp::implementation_type & impl, boost::asio::basic_socketboost::asio::ip::tcp,boost::asio::stream_socket_service<boost::asio::ip::tcp > & peer, boost::asio::ip::basic_endpointboost::asio::ip::tcp * peer_endpoint, boost::system::error_code & ec) Line 451 C++
Demo.exe!boost::asio::socket_acceptor_serviceboost::asio::ip::tcp::acceptboost::asio::ip::tcp,boost::asio::stream_socket_service<boost::asio::ip::tcp >(boost::asio::detail::win_iocp_socket_serviceboost::asio::ip::tcp::implementation_type & impl, boost::asio::basic_socketboost::asio::ip::tcp,boost::asio::stream_socket_service<boost::asio::ip::tcp > & peer, boost::asio::ip::basic_endpointboost::asio::ip::tcp * peer_endpoint, boost::system::error_code & ec, void * __formal) Line 266 C++
Demo.exe!boost::asio::basic_socket_acceptorboost::asio::ip::tcp,boost::asio::socket_acceptor_service<boost::asio::ip::tcp >::acceptboost::asio::ip::tcp,boost::asio::stream_socket_service<boost::asio::ip::tcp >(boost::asio::basic_socketboost::asio::ip::tcp,boost::asio::stream_socket_service<boost::asio::ip::tcp > & peer, void * __formal) Line 932 C++
Demo.exe!echo_server() Line 37 C++
[External Code]
Demo.exe!co::Task::Task_CB() Line 39 C++
Demo.exe!co::Task::() Line 85 C++
[External Code]
Demo.exe!co::FiberFunc(void * param) Line 30 C++
[External Code]
[Frames below may be incorrect and/or missing, no symbols loaded for kernel32.dll]

Compiler:
VS2013
Build method:
Generated the project with the default vs_proj setup and built it as x86. I then created my own Demo project, added simple5_asio.cpp, and did not disable HOOK.

Segfault when coroutine stack allocation fails

In the Context class constructor in ctx_ucontext/context.h:

    if (-1 == getcontext(&ctx_)) {
        ThrowError(eCoErrorCode::ec_makecontext_failed);
        return ;
    }

    stack_ = (char*)StackAllocator::get_malloc_fn()(stack_size_);
    DebugPrint(dbg_task, "valloc stack. size=%u ptr=%p",
            stack_size_, stack_);

    ctx_.uc_stack.ss_sp = stack_;
    ctx_.uc_stack.ss_size = stack_size_;
    ctx_.uc_link = NULL;

    makecontext(&ctx_, (void(*)(void))&ucontext_func, 1, &fn_);

When StackAllocator fails to allocate, stack_ is NULL, and the subsequent makecontext segfaults. My machine has 1 GB of RAM; running libgo_test under test/golang/, executing the coroutine go []{ test_switch(10000); }; segfaults, and the root cause turned out to be insufficient memory.

Could the Mac branch be maintained?

cmake .. -DENABLE_BOOST_CONTEXT=ON -DBOOST_ROOT="/opt/local" -DDISABLE_HOOK=ON

After that, make fails in several ways:
use of undeclared identifier 'getpagesize' (missing header include)
-Werror (unused functions are promoted to errors)
even with BOOST_ROOT set correctly, the linker cannot find the Boost libraries

It took more than an hour to get it building in Xcode...

Support for the fork system call

In some scenarios a fork-based multi-process model still has advantages over pthread multi-threading (for example, a crashing child process has little impact on the whole), so please consider adding support for fork.

Understanding the use of std::mutex in FileDescriptorCtx

#if LIBGO_SINGLE_THREAD
    typedef LFLock mutex_t;
#else
    typedef std::mutex mutex_t;
#endif 

FileDescriptorCtx contains the code above, using std::mutex rather than LFLock for synchronization in multi-threaded mode. Is this the right way to read it: if all coroutines accessing FileDescriptorCtx run on the same thread, each coroutine yields only after a complete operation (e.g. after close or set_et_mode has finished), so those accesses need no synchronization at all (setting the flag on completion suffices); mutual exclusion is only required between accesses from different threads, hence std::mutex, which avoids the unnecessary mutual exclusion between coroutines within the same thread that LFLock would impose.

A question about libgo_poll

In the libgo_poll function in linux/linux_glibc_hook.cpp, consider this code:

   io_sentry->timer_ = g_Scheduler.ExpireAt(
                    std::chrono::milliseconds(timeout),
                    [io_sentry]{
                    g_Scheduler.GetIoWait().IOBlockTriggered(io_sentry);
                    });

        // save io-sentry
        tk->io_sentry_ = io_sentry;

        // yield
        g_Scheduler.GetIoWait().CoSwitch();

        // clear task->io_sentry_ reference count
        tk->io_sentry_.reset();

        if (io_sentry->timer_) {
            g_Scheduler.CancelTimer(io_sentry->timer_);
            io_sentry->timer_.reset();
        }

Suppose several coroutines are working concurrently, all accessing backend network services, and the timeouts set in their io_sentry->timer_ entries are identical (the code uses a multimap). One coroutine's network access completes immediately while the others are still pending.
The immediately completed coroutine is woken and reaches g_Scheduler.CancelTimer(io_sentry->timer_); doesn't that erase every timer in the multimap with the same key (i.e. the same expiry time), deleting the other coroutines' timers as well?

Hooking the file read/write APIs

A question for the author: besides network I/O, local disk I/O is another place that can block a thread. Is it fair to say that once the file read/write system calls are hooked as well, coroutines will never block their thread, without resorting to co_wait?

Compiling sample2_yield.cpp + static linking + running

  • Copied sample2_yield.cpp outside the libgo folder
  • Build command: g++ -std=c++11 -g -Wall sample2_yield.cpp -Ilibgo/libgo/ -Ilibgo/libgo/linux -llibgo -llibgo_main -lboost_coroutine -lboost_context -lboost_system -lboost_thread -lpthread -llibgo -llibgo_main -lboost_coroutine -lboost_context -lboost_system -lboost_thread -lpthread -static -static-libgcc -static-libstdc++
  • The dynamically linked build runs normally
  • Using main instead of co_main, the statically linked build also works
  • Under gdb, after start the statically linked build throws immediately, never reaching the entry of main

A detailed report is at: https://github.com/Clcanny/Notes/blob/master/libgo/TutorialOfLibgo/TutorialOfLibgo.md

A concurrent read/write bug on smart pointers

bool BlockObject::Wakeup()
{
    ...
    tk->block_ = nullptr;
    if (tk->block_timer_) { // cancelling the timer must happen outside the lock, because it locks internally
        if (g_Scheduler.BlockCancelTimer(tk->block_timer_))
            tk->DecrementRef();
        tk->block_timer_.reset();
    }
    ...
}
void BlockObject::CancelWait(Task* tk, uint32_t block_sequence, bool in_timer)
{
    ...
    tk->block_ = nullptr;
    if (!in_timer && tk->block_timer_) { // cancelling the timer must happen outside the lock, because it locks internally
        g_Scheduler.BlockCancelTimer(tk->block_timer_);
        tk->block_timer_.reset();
    }
    ...
}

The code above may not execute on the same thread. With the coroutine-stealing mechanism enabled, the two paths can run concurrently, so the memory that block_timer_ points to can still be accessed after it has been freed.
Questions about the shared stack

Why was the shared-stack code removed? Running the test code on CentOS 5.8 with the default stack size (1 MB), total allocation reaches about 10 GB and the program keeps crashing; I suspect a thread stack overflow.

Can't compile on CentOS with Boost 1.61.1

cmake .. -DENABLE_BOOST_CONTEXT=ON                                                                                   17-10-23 13:35
------------ Options -------------
  CMAKE_BUILD_TYPE: Debug
  BOOST_ROOT:
-- Boost version: 1.65.1
-- Found the following Boost libraries:
--   coroutine
--   context
--   thread
--   system
--   date_time
--   chrono
--   regex
  layer_context: boost.context
  use cares: no
  use safe signal: no
  single thread mode: no
  enable_debugger: no
  enable_hook: yes
  build_dynamic_lib: yes
----------------------------------
-------------- Env ---------------
  CMAKE_SOURCE_DIR: /home/xxx/codes/test_libgo
  CMAKE_BINARY_DIR: /home/xxx/codes/test_libgo
----------------------------------
------------ Cxx flags -------------
  CMAKE_CXX_FLAGS_Debug:
------------------------------------
-- Configuring done
-- Generating done
-- Build files have been written to: /home/xxx/codes/test_libgo
make                                                                                                                 17-10-23 13:35
Scanning dependencies of target libgo_dynamic
[  3%] Building CXX object libgo/CMakeFiles/libgo_dynamic.dir/libgo/config.cpp.o
In file included from /home/xxx/codes/test_libgo/libgo/libgo/ctx_boost_context/context.h:4:0,
                 from /home/xxx/codes/test_libgo/libgo/libgo/context.h:73,
                 from /home/xxx/codes/test_libgo/libgo/libgo/config.cpp:2:
/home/xxx/codes/test_libgo/libgo/libgo/ctx_boost_context/context_v2.h:65:21: error: 'execution_context' in namespace 'boost::context' does not name a type
             typedef ::boost::context::execution_context<void> context_t;

I see execution_context is not included by boost/context/all.hpp :

//          Copyright Oliver Kowalke 2016.
// Distributed under the Boost Software License, Version 1.0.
//    (See accompanying file LICENSE_1_0.txt or copy at
//          http://www.boost.org/LICENSE_1_0.txt)

#include <boost/context/continuation.hpp>
#include <boost/context/fixedsize_stack.hpp>
#include <boost/context/pooled_fixedsize_stack.hpp>
#include <boost/context/protected_fixedsize_stack.hpp>
#include <boost/context/segmented_stack.hpp>
#include <boost/context/stack_context.hpp>
#include <boost/context/stack_traits.hpp>

It seems execution_context has been deprecated in recent Boost releases.

How can libgo be combined with an asio-based server?

crow is implemented on top of asio and uses an asynchronous model; if a handler takes a long time, it cannot sustain high concurrency. Combining it with libgo would be very powerful.

Here is crow's asynchronous example:

    CROW_ROUTE(app, "/logs/<int>")
    ([](const crow::request& /*req*/, crow::response& res, int after){
        CROW_LOG_INFO << "logs with last " << after;
        if (after < (int)msgs.size())
        {
            crow::json::wvalue x;
            for(int i = after; i < (int)msgs.size(); i ++)
                x["msgs"][i-after] = msgs[i];
            x["last"] = msgs.size();

            res.write(crow::json::dump(x));
            res.end();
        }
        else
        {
            vector<pair<crow::response*, decltype(chrono::steady_clock::now())>> filtered;
            for(auto p : ress)
            {
                if (p.first->is_alive() && chrono::steady_clock::now() - p.second < chrono::seconds(30))
                    filtered.push_back(p);
                else
                    p.first->end();
            }
            ress.swap(filtered);
            ress.push_back({&res, chrono::steady_clock::now()});
            CROW_LOG_DEBUG << &res << " stored " << ress.size();
        }
    });

Combined with libgo, I imagine it looking like this:

void foo_libgo_function(crow::response& res) {
    // some time-consuming, synchronous processing ...
    // when done, invoke crow's handler callback as well
    res.end();
}

    CROW_ROUTE(app, "/logs/<int>")
    ([](const crow::request& /*req*/, crow::response& res, int after){
            go foo_libgo_function; // roughly like this; the syntax may not be exact
    }

With crow acting as the server (asio underneath) and the business logic implemented with libgo, how should the code be written so that libgo's loop runs?

co_sched.RunUntilNoTask();   // what is the appropriate way to schedule libgo's coroutines here?

I am a beginner -- any guidance is greatly appreciated.

VS2013 build fails with the errors below; apparently the WinSock2 library (Ws2_32.lib) is not being linked. It would be good to mention this in the documentation.

Error 12 error LNK2019: unresolved external symbol ___WSAFDIsSet@8 referenced in function "int __cdecl co::connect_mode_hook<int (__cdecl*)(unsigned int,struct sockaddr const *,int),struct sockaddr const * &,int &>(int (_cdecl)(unsigned int,struct sockaddr const *,int),char const *,unsigned int,struct sockaddr const * &,int &)" (??$connect_mode_hook@P6AHIPBUsockaddr@@h@ZAAPBU1@AAH@co@@YAHP6AHIPBUsockaddr@@h@ZPBDIAAPBU1@AAH@Z) D:\libin\libgo\vs_proj\vs2013\Demo\libgo_d.lib(win_vc_hook.obj) Demo
Error 13 error LNK2019: unresolved external symbol __imp__ioctlsocket@12 referenced in function "bool __cdecl co::SetNonblocking(unsigned int,bool)" (?SetNonblocking@co@@YA_NI_N@Z) D:\libin\libgo\vs_proj\vs2013\Demo\libgo_d.lib(win_vc_hook.obj) Demo
Error 14 error LNK2019: unresolved external symbol __imp__getsockopt@20 referenced in function "bool __cdecl co::IsNonblocking(unsigned int)" (?IsNonblocking@co@@YA_NI@Z) D:\libin\libgo\vs_proj\vs2013\Demo\libgo_d.lib(win_vc_hook.obj) Demo
Error 15 error LNK2019: unresolved external symbol __imp__select@20 referenced in function "int __cdecl co::connect_mode_hook<int (_cdecl)(unsigned int,struct sockaddr const _,int),struct sockaddr const * &,int &>(int (_cdecl)(unsigned int,struct sockaddr const *,int),char const *,unsigned int,struct sockaddr const * &,int &)" (??$connect_mode_hook@P6AHIPBUsockaddr@@h@ZAAPBU1@AAH@co@@YAHP6AHIPBUsockaddr@@h@ZPBDIAAPBU1@AAH@Z) D:\libin\libgo\vs_proj\vs2013\Demo\libgo_d.lib(win_vc_hook.obj) Demo
Error 16 error LNK2019: unresolved external symbol __imp__setsockopt@20 referenced in function "int __cdecl co::hook_ioctlsocket(unsigned int,long,unsigned long *)" (?hook_ioctlsocket@co@@YAHIJPAK@Z) D:\libin\libgo\vs_proj\vs2013\Demo\libgo_d.lib(win_vc_hook.obj) Demo
Error 17 error LNK2019: unresolved external symbol __imp__WSASetLastError@4 referenced in function "int __cdecl co::hook_ioctlsocket(unsigned int,long,unsigned long *)" (?hook_ioctlsocket@co@@YAHIJPAK@Z) D:\libin\libgo\vs_proj\vs2013\Demo\libgo_d.lib(win_vc_hook.obj) Demo
Error 18 error LNK2019: unresolved external symbol __imp__WSAGetLastError@0 referenced in function "int __cdecl co::hook_ioctlsocket(unsigned int,long,unsigned long *)" (?hook_ioctlsocket@co@@YAHIJPAK@Z) D:\libin\libgo\vs_proj\vs2013\Demo\libgo_d.lib(win_vc_hook.obj) Demo

A question about libgo stack sizes

In Go, a goroutine starts with a 4 KB stack and grows it dynamically. According to the introduction, libgo defaults to a 2 MB stack. What happens when a coroutine's stack grows beyond 2 MB?

error: format '%d' expects argument of type 'int', but argument 5 has type 'co::TaskState'

libgo/src/processer.cpp:69:9: note: in expansion of macro 'DebugPrint'
DebugPrint(dbg_switch, "leave task(%s) state=%d", tk->DebugInfo(), tk->state_);
^
libgo/src/processer.cpp: In member function 'void co::Processer::CoYield(co::ThreadLocalInfo&)':
libgo/src/scheduler.h:21:68: error: format '%d' expects argument of type 'int', but argument 5 has type 'co::TaskState' [-Werror=format=]

uname -a
Linux 4.2.0-1-686-pae #1 SMP Debian 4.2.6-3 (2015-12-06) i686 GNU/Linux

gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/lib/gcc/i586-linux-gnu/5/lto-wrapper
Target: i586-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Debian 5.3.1-6' --with-bugurl=file:///usr/share/doc/gcc-5/README.Bugs --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-5 --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-vtable-verify --enable-libmpx --enable-plugin --with-system-zlib --disable-browser-plugin --enable-java-awt=gtk --enable-gtk-cairo --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-5-i386/jre --enable-java-home --with-jvm-root-dir=/usr/lib/jvm/java-1.5.0-gcj-5-i386 --with-jvm-jar-dir=/usr/lib/jvm-exports/java-1.5.0-gcj-5-i386 --with-arch-directory=i386 --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --enable-objc-gc --enable-targets=all --enable-multiarch --with-arch-32=i586 --with-multilib-list=m32,m64,mx32 --enable-multilib --with-tune=generic --enable-checking=release --build=i586-linux-gnu --host=i586-linux-gnu --target=i586-linux-gnu
Thread model: posix
gcc version 5.3.1 20160114 (Debian 5.3.1-6)

Are new threads started automatically inside the library?

Does the library start new threads internally as needed, or must the caller start some number of threads and then call Run in each of them?

Also, is there a potential memory leak in this part of thread_pool.cpp?
If a coroutine throws an exception, will delete elem; be skipped?
Or is the exception swallowed before it propagates here? Would wrapping the element in a unique_ptr be better?

void ThreadPool::RunLoop()
{
    assert_std_thread_lib();
    for (;;)
    {
        TPElemBase *elem = get();
        if (!elem) continue;
        elem->Do();
        delete elem;
    }
}

A question about BlockObject's timed wait outside a coroutine

bool BlockObject::CoBlockWaitTimed(MininumTimeDurationType timeo)
{
    auto begin = std::chrono::steady_clock::now();
    if (!g_Scheduler.IsCoroutine()) {
        while (!TryBlockWait() &&
                std::chrono::duration_cast
                (std::chrono::steady_clock::now() - begin) < timeo)
            usleep(10 * 1000);
        return false;
    }
    ......

Outside a coroutine, doesn't this inevitably return false?

Deadlock with multiple connections and threads in the ucorf example

Testing the ucorf example, a deadlock occurs when multiple connections, multiple threads, and multiple coroutines call a single stub concurrently.
Tracing the code, the problem is in uint32_t Processer::Run(uint32_t &done_count) in processer.cpp, around line 66:

case TaskState::sys_block:
    assert(tk->block_);
    if (!tk->block_->AddWaitTask(tk)){
        runnable_list_.push(tk);
        ... ...

Before pushing the task tk onto runnable_list_, a line tk->state_ = TaskState::runnable; should be added, or AddTaskRunnable(tk) called directly.

After the fix:

case TaskState::sys_block:
    assert(tk->block_);
    if (!tk->block_->AddWaitTask(tk)){
        tk->state_ = TaskState::runnable;
        runnable_list_.push(tk);
        ... ...

or:

case TaskState::sys_block:
    assert(tk->block_);
    if (!tk->block_->AddWaitTask(tk)){
        AddTaskRunnable(tk);
        ... ...

With this change, the ucorf example no longer deadlocks under multiple connections, threads, and coroutines.
Is this a libgo bug?

Compile error

Compiling with gcc 5.3.0 produces the following errors:
make
Scanning dependencies of target libgo_dynamic
[ 3%] Building CXX object CMakeFiles/libgo_dynamic.dir/libgo/co_rwmutex.cpp.o
[ 6%] Building CXX object CMakeFiles/libgo_dynamic.dir/libgo/sleep_wait.cpp.o
[ 10%] Building CXX object CMakeFiles/libgo_dynamic.dir/libgo/task.cpp.o
[ 13%] Building CXX object CMakeFiles/libgo_dynamic.dir/libgo/block_object.cpp.o
[ 17%] Building CXX object CMakeFiles/libgo_dynamic.dir/libgo/timer.cpp.o
[ 20%] Building CXX object CMakeFiles/libgo_dynamic.dir/libgo/co_mutex.cpp.o
[ 24%] Building CXX object CMakeFiles/libgo_dynamic.dir/libgo/thread_pool.cpp.o
[ 27%] Building CXX object CMakeFiles/libgo_dynamic.dir/libgo/processer.cpp.o
In file included from /home/zsx/download/libgo/libgo/task.h:8:0,
from /home/zsx/download/libgo/libgo/processer.h:2,
from /home/zsx/download/libgo/libgo/processer.cpp:1:
/home/zsx/download/libgo/libgo/processer.cpp: In member function ‘void co::Processer::CoYield(co::ThreadLocalInfo&)’:
/home/zsx/download/libgo/libgo/config.h:55:107: error: format ‘%d’ expects argument of type ‘int’, but argument 6 has type ‘co::TaskState’ [-Werror=format=]
::co::codebug_GetCurrentProcessID(), ::co::codebug_GetCurrentThreadID(), ##VA_ARGS);
^
/home/zsx/download/libgo/libgo/processer.cpp:120:5: note: in expansion of macro ‘DebugPrint’
DebugPrint(dbg_yield, "yield task(%s) state=%d", tk->DebugInfo(), tk->state_);
^
cc1plus: all warnings being treated as errors
make[2]: *** [CMakeFiles/libgo_dynamic.dir/libgo/processer.cpp.o] Error 1
make[1]: *** [CMakeFiles/libgo_dynamic.dir/all] Error 2
make: *** [all] Error 2

Recurring timers

I could only find one-shot timers in the code. Wrapping a one-shot timer to re-arm itself changes the TimerId on every firing; could a recurring timer be provided?

How to handle exceptions inside coroutines?

Out of necessity, I have recently been using the fiber family of functions for coroutines, and found that an exception thrown inside a coroutine cannot be caught by the upper layer. How should this be handled? Is it viable to wrap the coroutine entry in a try, save the caught exception, and rethrow it after switching back to the upper layer?

A core dump found in coroutine stress testing

With the business logic completely stripped out, a long-running test starts 100 coroutines on the main thread; each starts 5 more coroutines that allocate, set, and free memory (through smart pointers), while a group of threads runs RunUntilNoTask.
With very low probability it dumps core with the stack below (the 0xf3 entries are due to an old gdb version); this looks like a libgo bug:

#0 0x0000000000537c8b in raise ()
#1 0x00000000005d62d5 in abort ()
#2 0x00000000005d0a5f in __assert_fail_base ()
#3 0x00000000005d0adc in __assert_fail ()
#4 0x0000000000568d4e in push (this=Unhandled dwarf expression opcode 0xf3
) at /home/gaowei/libgo/libgo/ts_queue.h:203
#5 co::Processer::StealHalf (this=Unhandled dwarf expression opcode 0xf3
) at /home/gaowei/libgo/libgo/processer.cpp:116
#6 0x00000000005401ca in co::Scheduler::DoRunnable (this=0x95e280, allow_steal=Unhandled dwarf expression opcode 0xf3
) at /home/gaowei/libgo/libgo/scheduler.cpp:190
#7 0x000000000054209e in co::Scheduler::Run (this=0x95e280, flags=2147483647) at /home/gaowei/libgo/libgo/scheduler.cpp:100
#8 0x000000000054249d in co::Scheduler::RunUntilNoTask (this=0x95e280, loop_task_count=0) at /home/gaowei/libgo/libgo/scheduler.cpp:158
#9 0x0000000000425336 in Dahua::EFS::CCoroutineRunThread::threadProc (this=0x1b44fa0) at CoroutineRunThread.cpp:72
#10 0x00000000004eb10e in (anonymous namespace)::InternalThreadBody (pdat=) at Src/Infra3/Thread.cpp:102
#11 0x0000000000530464 in start_thread ()
#12 0x0000000000625a39 in clone ()
