sofa-pbrpc's Introduction

sofa-pbrpc

A light-weight RPC implementation of Google's protobuf RPC framework.

Wiki: https://github.com/baidu/sofa-pbrpc/wiki

Features

  • High performance.
  • Easy to use. Refer to the sample code in './sample'.
  • Supports sync calls and async calls. Refer to './sample/echo'.
  • Supports three-level (service/method/request) timeouts. Refer to './sample/timeout_sample'.
  • Supports transparent compression. Refer to './sample/compress_sample'.
  • Supports mock testing. Refer to './sample/mock_sample'.
  • Supports network flow control.
  • Supports automatic connecting and reconnecting.
  • Supports a keep-alive time for idle connections.
  • Supports statistics for profiling.
  • Supports multi-server load balancing and fault tolerance.
  • Supports the HTTP protocol.
  • Provides a web monitor.
  • Provides a Python client library.

Dependencies

This library depends on boost-1.53.0 (headers only), protobuf-2.4.1, snappy, and zlib.

ATTENTION: the boost headers are only needed when compiling the library; they are not needed by user code.

Additionally, './unit-test' and './sample/mock_sample' also depend on gtest.

Build

  1. Modify the file './depends.mk' to specify the dependency libraries.
    The required libraries are boost, protobuf, snappy, and zlib.
  2. Run 'make' to build sofa-pbrpc.
    The default optimization level is 'O2'.
    To change it, modify the 'OPT' variable in file './Makefile'.
  3. Run 'make install' to install sofa-pbrpc.
    The default install directory is './output'.
    To change it, modify the 'PREFIX' variable in file './Makefile'.

For more details, please refer to the wiki Build Guide.

Sample

For sample code, please refer to './sample' and the wiki Quick Start.
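
For orientation, here is a minimal synchronous client sketch in the spirit of './sample/echo'. It assumes the generated types from the sample's echo_service.proto (EchoServer_Stub, EchoRequest, EchoResponse) and uses a placeholder server address; consult the actual sample for the authoritative version.

#include <cstdio>
#include <sofa/pbrpc/pbrpc.h>
#include "echo_service.pb.h"   // generated from the sample's echo_service.proto

int main()
{
    sofa::pbrpc::RpcClientOptions client_options;
    sofa::pbrpc::RpcClient rpc_client(client_options);
    // Channel to the server; "127.0.0.1:12321" is only a placeholder address.
    sofa::pbrpc::RpcChannel rpc_channel(&rpc_client, "127.0.0.1:12321");
    sofa::pbrpc::test::EchoServer_Stub stub(&rpc_channel);

    sofa::pbrpc::RpcController cntl;
    cntl.SetTimeout(3000);
    sofa::pbrpc::test::EchoRequest request;
    request.set_message("Hello from client");
    sofa::pbrpc::test::EchoResponse response;

    // Passing NULL as the done closure makes this a synchronous call.
    stub.Echo(&cntl, &request, &response, NULL);
    if (cntl.Failed()) {
        fprintf(stderr, "request failed: %s\n", cntl.ErrorText().c_str());
        return 1;
    }
    printf("response: %s\n", response.message().c_str());
    return 0;
}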

Profiling

For Profiling feature, please refer to the wiki Profiling.

Performance

For performance details, please refer to the wiki Performance.

Implementation

For implementation details, please refer to the wiki and file doc/sofa-pbrpc-document.md.

Support

[email protected]

sofa-pbrpc's People

Contributors

00k, bluebore, cyshi, demiaowu, duanguoxue, dy2012, gmd20, koalademo, qinzuoyan, richardyao, teodor-pripoae, yan97ao, ye-tian-zero, yuandong1222, yvxiang, zd-double

sofa-pbrpc's Issues

Can HTTP debugging be supported?

For example, provide a simple HTTP page where the fields of a Request can be filled in; after the request is sent, the page displays the Response.

Connection setup failure

Latest master code.

Connection setup fails:
libsofa_pbrpc ERROR src/sofa/pbrpc/rpc_byte_stream.h:285] on_connect(): connect error: 10.195.113.13:7701: The descriptor does not fit into the select call's fd_set

Not reproduced yet; opening this issue to track it.

Improve how ConnectionCount is obtained: always return the real-time value

Problem: the original ConnectionCount is an approximate value refreshed periodically by TimerMaintain. Since TimerMaintain runs only every 100 ms, the ConnectionCount value is not real-time, yet the max_connection_count limit on the number of connections relies on it.

Solution: add an OnClosed() callback to the stream and remove the stream from RpcServerImpl::_stream_set or RpcClientImpl::_stream_map immediately when it is closed; when ConnectionCount is requested, return _stream_set.size() or _stream_map.size() directly. A minimal sketch follows.
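
A minimal sketch of the proposed bookkeeping, with std:: containers and locks standing in for the project's own stream and lock types (those stand-ins are assumptions; the real RpcServerImpl/RpcClientImpl members differ):

#include <mutex>
#include <set>

class ConnectionBook
{
public:
    // Called from the stream's new OnClosed() callback: remove the stream
    // immediately instead of waiting for the next TimerMaintain() round.
    void OnClosed(void* stream)
    {
        std::lock_guard<std::mutex> lock(_mutex);
        _streams.erase(stream);
    }
    void OnOpened(void* stream)
    {
        std::lock_guard<std::mutex> lock(_mutex);
        _streams.insert(stream);
    }
    // ConnectionCount now reads the live container size, so the
    // max_connection_count check always sees an up-to-date value.
    int ConnectionCount()
    {
        std::lock_guard<std::mutex> lock(_mutex);
        return static_cast<int>(_streams.size());
    }
private:
    std::mutex _mutex;
    std::set<void*> _streams;
};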

Fix premature stream closing caused by keep_alive_time

Problem: the current keep_alive_time feature relies on the stream's last read/write time (last_rw_time). If now - last_rw_time > keep_alive_time, the connection is considered idle beyond the threshold and the stream is actively closed. This can close connections prematurely: if a request is being processed on a connection and its processing takes longer than keep_alive_time, and no reads or writes happen on the stream while waiting for it to finish, the stream is closed early, so the response can no longer be sent once processing completes. That is incorrect.

Solution: have RpcClientStream and RpcServerStream each maintain a pending_process_count; if pending_process_count > 0, the stream must not be closed. A sketch of the rule is below.
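
A sketch of the rule, with member names mirroring the description above (the real stream classes are more involved):

#include <stdint.h>

struct StreamIdleState
{
    int64_t last_rw_time;           // timestamp of the last read/write
    int64_t pending_process_count;  // requests received but not yet answered

    // A stream is only eligible for closing when it is idle AND nothing is
    // still being processed on it, so a long-running request keeps the
    // connection alive even if no bytes move for a while.
    bool should_close(int64_t now, int64_t keep_alive_time) const
    {
        return pending_process_count == 0
            && now - last_rw_time > keep_alive_time;
    }
};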

Build failure

What g++ version does sofa-pbrpc require? The build fails for me with g++ 4.8.2; the protobuf version is 3.0.

g++ -O2 -pipe -W -Wall -fPIC -D_GNU_SOURCE -D__STDC_LIMIT_MACROS -DHAVE_SNAPPY -Isrc -I/usr/include -I/home/kentpeng/sofa-pbrpc/thirty/protobuf/include -I/home/kentpeng/sofa-pbrpc/thirty/snappy/include -I/home/kentpeng/sofa-pbrpc/thirty/zlib/include -c -o src/sofa/pbrpc/boost_system_error_code.o src/sofa/pbrpc/boost_system_error_code.cc
src/sofa/pbrpc/boost_system_error_code.cc:440:1: internal compiler error: in function_and_variable_visibility, at ipa.c:815
} // namespace boost
^
Please submit a full bug report,
with preprocessed source if appropriate.
See file:///usr/share/doc/gcc-4.8/README.Bugs for instructions.
Preprocessed source stored into /tmp/ccpGeFhP.out file, please attach this to your bugreport.
make: *** [src/sofa/pbrpc/boost_system_error_code.o] Error 1

Performance optimization

400k QPS was leading two years ago, but today many competing products have surpassed it.
We would like to spend a few days on performance tuning and push the peak throughput above 1,000,000 QPS.

Cannot compile with protobuf 2.5 (I deleted the original *.pb.h and *.pb.cc and regenerated them from *.proto with protobuf 2.5)

g++ -O2 -pipe -W -Wall -fPIC -D_GNU_SOURCE -D__STDC_LIMIT_MACROS -DHAVE_SNAPPY -Isrc -I/usr/local/include -I/usr/local/include -I/usr/local/include -I/include -c -o src/sofa/pbrpc/builtin_service.pb.o src/sofa/pbrpc/builtin_service.pb.cc

src/sofa/pbrpc/builtin_service.pb.cc: In function 'void sofa::pbrpc::builtin::protobuf_AddDesc_builtin_5fservice_2eproto()':
src/sofa/pbrpc/builtin_service.pb.cc:413:3: error: 'protobuf_AddDesc_sofa_2fpbrpc_2frpc_5foption_2eproto' is not a member of 'sofa::pbrpc'

make: *** [src/sofa/pbrpc/builtin_service.pb.o] Error 1
[My environment: Ubuntu 12.04, gcc 4.6.3, protobuf 2.5]

Profiling support plan

Functionality

By simply linking against the gperftools library, users can view the server's profiling data. The data can be displayed either as text or as an image, and graphviz does not need to be installed on the server.

Design details

Displaying profiling data

Processing flow

class Profiling
{
public:
    // Two servlets registered with the WebService, used to display profiling data
    bool CpuProfiling(HTTPRequest, HTTPResponse);
    bool MemoryProfiling(HTTPRequest, HTTPResponse);
};
  • Taking CpuProfiling as an example:
    • The user visits the RPC WebServer in a browser at the path /cpu and clicks a button on the page to start profiling.
    • When CpuProfiling receives the request, it performs the necessary checks (parameter validation, and whether a profiling run is already in progress) and then starts profiling. To avoid blocking the IO threads, Profiling adds a dedicated thread for the profiling work.
    • The profiling thread function starts profiling, sleeps for the specified duration (configurable by the user, passed in as a browser parameter), stops profiling, and stores the data in a fixed location.
    • On the browser side, the user polls with Ajax to check whether the data is ready.
    • When the server receives a data-check request, it fetches the data from the backend and returns it to the browser.
class Profiling
{
public:
    // Two servlets registered with the WebService, used to display profiling data
    bool CpuProfiling(HTTPRequest, HTTPResponse);
    bool MemoryProfiling(HTTPRequest, HTTPResponse);
private:
    // Profiling thread group, dedicated to running the profiling work
    ThreadGroupImplPtr _profiling_thread_group;
};
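
A sketch of what the profiling thread function could do; the output path, helper name, and sleep-based duration handling are illustrative only, while ProfilerStart/ProfilerStop are the gperftools weak symbols declared later in this design:

#include <stddef.h>
#include <unistd.h>   // for sleep()

// Weak declarations, as in the design below: they resolve only if the user
// links gperftools, otherwise the function addresses compare equal to NULL.
extern "C"
{
    int  __attribute__((weak)) ProfilerStart(const char* fname);
    void __attribute__((weak)) ProfilerStop();
}

// Runs on the dedicated profiling thread so IO threads are never blocked.
// 'seconds' is the duration requested from the browser.
void do_cpu_profiling(int seconds)
{
    if (ProfilerStart == NULL || ProfilerStop == NULL) {
        return;   // gperftools not linked in, nothing to do
    }
    ProfilerStart("./rpc_profiling/cpu.prof");   // illustrative output path
    sleep(seconds);
    ProfilerStop();
    // Converting the profile to dot/text and flagging "data ready" happens elsewhere.
}
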
  • To avoid requiring graphviz on the server, viz.js is executed in the browser; the server only needs to generate a dot file.

Add viz.js.cc:

static const std::string vizjs = "content of viz.js";

To generate the dot file, add pprof.perl.cc:

static const std::string pprof = "content of pprof";

Data generation

  • To make things easy for users (link only, no code changes), Profiling declares the corresponding weak symbols:
// file profiling.cc
extern "C"
{
    // Functions needed for CPU profiling
    int __attribute__((weak)) ProfilerStart(const char* fname);
    void __attribute__((weak)) ProfilerStop();
}
  • In addition, add two touch functions and a macro switch in common.h:
// file common.h
#ifdef SOFA_PBRPC_PROFILING
#include <gperftools/profiler.h>
#endif

namespace sofa {
namespace pbrpc {

#ifdef SOFA_PBRPC_PROFILING
void touch_gperftools_profiler()
{
    SCHECK(false);
    ProfilerStart("function_never_run");
    ProfilerStop();
}

void touch_profiler()
{
    SCHECK(false);
    touch_gperftools_profiler();
}
#endif

}
}
  • The server's profiling thread creates a profiling directory under the current working directory and generates the corresponding data there.
  • The server uses a pipe to capture the dot file produced by running pprof.pl and returns it to the browser (a rough sketch follows).
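
A rough sketch of the pipe-based dot generation (the pprof invocation, paths, and arguments are illustrative, not the actual implementation):

#include <stdio.h>
#include <string>

// Run pprof on the collected profile and return the generated dot graph as
// a string; an empty string indicates failure.
static std::string generate_dot(const std::string& binary_path,
                                const std::string& profile_path)
{
    std::string cmd = "perl ./rpc_profiling/pprof.perl --dot "
                    + binary_path + " " + profile_path;
    std::string dot;
    FILE* pipe = popen(cmd.c_str(), "r");
    if (pipe == NULL) {
        return dot;
    }
    char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof(buf), pipe)) > 0) {
        dot.append(buf, n);
    }
    pclose(pipe);
    return dot;   // sent back to the browser, where viz.js renders it
}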

@qinzuoyan @bluebore please take a look first.

When the peer dies during an RPC, the caller gets a segmentation fault

During an RPC call, if the peer process dies, the caller process crashes with a segmentation fault.

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7ffff488f700 (LWP 20041)]
google::protobuf::MessageLite::SerializePartialToCodedStream (this=this@entry=0x7fffd00027d0, output=output@entry=0x7ffff488e330) at google/protobuf/message_lite.cc:235
235 const int size = ByteSize(); // Force size to be cached.
(gdb) bt
#0 google::protobuf::MessageLite::SerializePartialToCodedStream (this=this@entry=0x7fffd00027d0, output=output@entry=0x7ffff488e330) at google/protobuf/message_lite.cc:235
#1 0x00007ffff7b20ee5 in google::protobuf::MessageLite::SerializeToCodedStream (this=this@entry=0x7fffd00027d0, output=output@entry=0x7ffff488e330)

at google/protobuf/message_lite.cc:230

#2 0x00007ffff7b20f1f in google::protobuf::MessageLite::SerializeToZeroCopyStream (this=0x7fffd00027d0, output=) at google/protobuf/message_lite.cc:263
#3 0x000000000048b100 in sofa::pbrpc::RpcClientImpl::CallMethod(google::protobuf::Message const_, google::protobuf::Message_, sofa::pbrpc::shared_ptrsofa::pbrpc::RpcControllerImpl const&) ()
#4 0x00000000004a2fc9 in sofa::pbrpc::SimpleRpcChannelImpl::CallMethod(google::protobuf::MethodDescriptor const_, google::protobuf::RpcController_, google::protobuf::Message const_, google::protobuf::Message_, google::protobuf::Closure*) ()
#5 0x000000000043d342 in bfs::ChunkServer_Stub::WriteBlock (this=0x740070, controller=0x7fffc4000a20, request=0x7fffd00027d0, response=0x7fffc4000940, done=0x7fffc4000e90)

    at src/proto/chunkserver.pb.cc:2720

#6 0x000000000041d89a in bfs::RpcClient::AsyncRequest<bfs::ChunkServer_Stub, bfs::WriteBlockRequest, bfs::WriteBlockResponse, google::protobuf::Closure>(bfs::ChunkServer_Stub_, void (bfs::ChunkServer_Stub::)(google::protobuf::RpcController, bfs::WriteBlockRequest const_, bfs::WriteBlockResponse_, google::protobuf::Closure_), bfs::WriteBlockRequest const_, bfs::WriteBlockResponse_, boost::function<void (bfs::WriteBlockRequest const*, bfs::WriteBlockResponse*, bool, int)>, int, int) (this=0x742fc0, stub=0x740070,

        func=&virtual table offset 48, request=0x7fffd00027d0, response=0x7fffc4000940, callback=..., rpc_timeout=60, retry_times=1) at ./src/rpc/rpc_client.h:86

#7 0x0000000000412b7a in bfs::BfsFileImpl::DelayWriteChunk (this=0x74e740, buffer=0x73f860, request=0x7fffd00027d0, retry_times=0, cs_addr="Porsche:8020")

at src/sdk/bfs.cc:846

#8 0x000000000042f0e5 in boost::_mfi::mf4<void, bfs::BfsFileImpl, bfs::WriteBuffer*, bfs::WriteBlockRequest const*, int, std::string>::operator() (this=0x7fffc4000900,

        p=0x74e740, a1=0x73f860, a2=0x7fffd00027d0, a3=0, a4="Porsche:8020") at /usr/include/boost/bind/mem_fn_template.hpp:506

#9 0x000000000042e10c in boost::_bi::list5boost::_bi::value<bfs::BfsFileImpl*, boost::_bi::valuebfs::WriteBuffer*, boost::_bi::value<bfs::WriteBlockRequest const*>, boost::_bi::value, boost::_bi::valuestd::string >::operator()<boost::_mfi::mf4<void, bfs::BfsFileImpl, bfs::WriteBuffer*, bfs::WriteBlockRequest const*, int, std::string>, boost::_bi::list0> (this=0x7fffc4000910, f=..., a=...) at /usr/include/boost/bind/bind.hpp:525
#10 0x000000000042c4e0 in boost::_bi::bind_t<void, boost::_mfi::mf4<void, bfs::BfsFileImpl, bfs::WriteBuffer*, bfs::WriteBlockRequest const*, int, std::string>, boost::_bi::list5boost::_bi::value<bfs::BfsFileImpl*, boost::_bi::valuebfs::WriteBuffer*, boost::_bi::value<bfs::WriteBlockRequest const*>, boost::_bi::value, boost::_bi::valuestd::string > >::operator() (this=0x7fffc4000900) at /usr/include/boost/bind/bind_template.hpp:20
#11 0x0000000000429cca in boost::detail::function::void_function_obj_invoker0<boost::_bi::bind_t<void, boost::_mfi::mf4<void, bfs::BfsFileImpl, bfs::WriteBuffer*, bfs::WriteBlockRequest const*, int, std::string>, boost::_bi::list5boost::_bi::value<bfs::BfsFileImpl*, boost::_bi::valuebfs::WriteBuffer*, boost::_bi::value<bfs::WriteBlockRequest const*>, boost::_bi::value, boost::_bi::valuestd::string > >, void>::invoke (function_obj_ptr=...) at /usr/include/boost/function/function_template.hpp:153
#12 0x000000000041a6a5 in boost::function0::operator() (this=0x7ffff488ee40) at /usr/include/boost/function/function_template.hpp:767
#13 0x00000000004165de in common::ThreadPool::ThreadProc (this=0x738c80 bfs::g_thread_pool) at ./src/common/thread_pool.h:171
#14 0x00000000004162ce in common::ThreadPool::ThreadWrapper (arg=0x738c80 bfs::g_thread_pool) at ./src/common/thread_pool.h:145
#15 0x00007ffff76a50a5 in start_thread (arg=0x7ffff488f700) at pthread_create.c:309
#16 0x00007ffff6c93cfd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

Compile error.

g++ -O2 -pipe -W -Wall -fPIC -D_GNU_SOURCE -D__STDC_LIMIT_MACROS -DHAVE_SNAPPY -Isrc -I../boost_1_53_0 -I../protobuf-2.4.1/output/include -I/usr/local/include -I/include -c -o src/sofa/pbrpc/dynamic_rpc_channel_impl.o src/sofa/pbrpc/dynamic_rpc_channel_impl.cc
In file included from src/sofa/pbrpc/common_internal.h:19:0,
from src/sofa/pbrpc/dynamic_rpc_channel_impl.h:13,
from src/sofa/pbrpc/dynamic_rpc_channel_impl.cc:7:
src/sofa/pbrpc/ptime.h: In function 'std::string sofa::pbrpc::ptime_to_string(const PTime&)':
src/sofa/pbrpc/ptime.h:42:37: warning: format '%ld' expects argument of type 'long int', but argument 10 has type 'boost::date_time::time_duration<boost::posix_time::time_duration, boost::date_time::time_resolution_traits<boost::date_time::time_resolution_traits_adapted64_impl, (boost::date_time::time_resolutions)5u, 1000000, 6u> >::fractional_seconds_type {aka long long int}' [-Wformat]
src/sofa/pbrpc/ptime.h:42:37: warning: format '%ld' expects argument of type 'long int', but argument 10 has type 'boost::date_time::time_duration<boost::posix_time::time_duration, boost::date_time::time_resolution_traits<boost::date_time::time_resolution_traits_adapted64_impl, (boost::date_time::time_resolutions)5u, 1000000, 6u> >::fractional_seconds_type {aka long long int}' [-Wformat]
src/sofa/pbrpc/atomic.h: Assembler messages:
src/sofa/pbrpc/atomic.h:59: Error: invalid instruction suffix for `xadd'
make: *** [src/sofa/pbrpc/dynamic_rpc_channel_impl.o] Error 1

OS:
Linux 3.2.0-59-generic-pae #90-Ubuntu SMP Tue Jan 7 23:07:06 UTC 2014 i686 i686 i386 GNU/Linux
gcc -v:
gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)

sofa-pbrpc's HTTP protocol support behaves unexpectedly during routing

/a/b
/a/c
Registering both of the above paths makes the second registration fail.
In addition, the behavior where /a/* matches the WebServlet registered at /a needs to be reconsidered, because it leads to the following problem:
for example, if I register /a and also register /a/b and /a/c, everything works as expected;
but if I do not register /a and then register /a/b and /a/c, the second registration fails.
I suggest that HTTP path resolution drop this kind of fallback logic: it may hide bugs, it costs performance, and users rarely type URLs directly to query.

sofa-pbrpc and bgcc

Both sofa-pbrpc and bgcc come from Baidu. Which one is used more widely inside Baidu?

The "resolve address failed" problem

Symptom

Recently, some services in production that depend on sofa-pbrpc ran into "resolve address failed" errors, which prevented the client from communicating with some servers.

Cause

As described in #37: RpcChannel resolves the address in its constructor by calling the concrete channel implementation's Init(), whose result cannot be returned to the caller. When address resolution fails, the _resolve_address_succeed flag is set accordingly. Every subsequent RpcChannel::CallMethod checks _resolve_address_succeed; if it is false, the call fails back to the user with the RPC_ERROR_RESOLVE_ADDRESS error code.

Proposed solutions

  1. After a CallMethod call fails (simple_rpc_channel_impl.cc, line 139), the application checks for the RPC_ERROR_RESOLVE_ADDRESS error code, re-creates the channel, and lets it resolve the address again (a rough sketch follows this list).
    Pros: no change to the RPC interface or implementation; the user controls how resolution failures are handled.
    Cons: existing services need to add handling for the RPC_ERROR_RESOLVE_ADDRESS error and re-resolve the address.
  2. Add an interface like IsValid() to RpcChannel that reports whether Init succeeded; the user calls it right after creating the channel and acts on the result.
    Pros: a resolution failure can be detected immediately after the channel is created, so the channel can be re-created right away.
    Cons: services that depend on the RPC library must call the new interface after creating a channel and handle the result.
  3. In RpcChannel::CallMethod, after detecting the failed-resolution state (simple_rpc_channel_impl.cc, line 131), resolve the address again.
    Pros: services that depend on the RPC library do not need to be upgraded.
    Cons: address resolution is synchronous, so resolving inside CallMethod may block the user's thread, and the user has no choice in how it is handled.
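
For reference, option 1 could look roughly like this on the application side. The error-code name RPC_ERROR_RESOLVE_ADDRESS comes from the description above; the surrounding types follow the echo sample, and whether the controller exposes the code via ErrorCode() should be checked against the actual headers:

#include <string>
#include <sofa/pbrpc/pbrpc.h>

// After a failed call, rebuild the channel if the failure was caused by
// address resolution, so that Init() gets a chance to resolve again.
void recreate_channel_if_unresolved(sofa::pbrpc::RpcClient* client,
                                    sofa::pbrpc::RpcChannel** channel,
                                    const std::string& server_address,
                                    sofa::pbrpc::RpcController* cntl)
{
    if (cntl->Failed()
            && cntl->ErrorCode() == sofa::pbrpc::RPC_ERROR_RESOLVE_ADDRESS) {
        delete *channel;
        *channel = new sofa::pbrpc::RpcChannel(client, server_address);
        // Any stubs built on the old channel must be re-created as well,
        // and the failed call can then be retried on the new channel.
    }
}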

@qinzuoyan @cyshi do you two have any suggestions for this problem?

Add Cancel support

Background

"Cancel" here means that the client can actively cancel the execution of a request: either cancel it before the request has been sent over the network, or deliver a Cancel command over the network to the server so that it stops executing the service early (provided the server-side service implementation supports cancellation). In both cases the goal is to avoid unnecessary resource consumption, whether network or compute.

protobuf's RPC framework already defines the interface and semantics of the Cancel feature, specifically in the RpcController class:

  // Client-side methods ---------------------------------------------

  // Advises the RPC system that the caller desires that the RPC call be
  // canceled.  The RPC system may cancel it immediately, may wait awhile and
  // then cancel it, or may not even cancel the call at all.  If the call is
  // canceled, the "done" callback will still be called and the RpcController
  // will indicate that the call failed at that time.
  virtual void StartCancel() = 0;

  // Server-side methods ---------------------------------------------

  // If true, indicates that the client canceled the RPC, so the server may
  // as well give up on replying to it.  The server should still call the
  // final "done" callback.
  virtual bool IsCanceled() const = 0;

  // Asks that the given callback be called when the RPC is canceled.  The
  // callback will always be called exactly once.  If the RPC completes without
  // being canceled, the callback will be called after completion.  If the RPC
  // has already been canceled when NotifyOnCancel() is called, the callback
  // will be called immediately.
  //
  // NotifyOnCancel() must be called no more than once per request.
  virtual void NotifyOnCancel(Closure* callback) = 0;

Consider how a server-side user would use the cancel feature: at the beginning of the user-implemented service handler, register a Closure callback via RpcController::NotifyOnCancel(); the callback's job is to terminate the service processing early and avoid unnecessary resource consumption. A polling-style sketch using the interface quoted above is shown below.
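
A small sketch of the server side using the IsCanceled()/done contract quoted above; the NotifyOnCancel variant would register a callback that sets an equivalent flag. The WorkState helpers here are hypothetical placeholders for the service's real processing loop:

#include <google/protobuf/service.h>

// Long-running handlers can check the cancel state between work steps and
// give up early, while still honoring the contract that 'done' must run.
template <typename WorkState>
void run_cancellable_work(google::protobuf::RpcController* controller,
                          WorkState* state,
                          google::protobuf::Closure* done)
{
    while (state->has_more()) {             // hypothetical: more work pending?
        if (controller->IsCanceled()) {
            break;                          // client gave up; stop wasting resources
        }
        state->work_step();                 // hypothetical: one unit of work
    }
    done->Run();                            // the final callback must still be called
}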

Implementation

One implementation under consideration: when the client calls RpcController::StartCancel, a special cancel_request is sent to the server; when the server receives it, it finds the RpcController associated with that request and invokes the callback that the service implementer registered via RpcController::NotifyOnCancel().

sofa-pbrpc cookie feature design

Background

In some RPC use cases, users need to transfer data that is not wrapped in the protobuf protocol; the format and handling of that data are defined by the user, and the data is ultimately transferred between client and server as a byte stream. sofa-pbrpc needs an internal attachment mechanism that lets users define the format and handling of their own extra data: the user sets the attachment on the client side, and the server receives and operates on it. On top of this mechanism, a default Cookie plugin is provided to store and transfer client-side state such as a logid.

Functionality

Implement the sofa-pbrpc attachment mechanism, which supports user-attached data whose definition and handling live in user code, and implement the cookie plugin on top of it.

Interface design

  • sofa-pbrpc provides an RpcAttachment base class with Serialize and Deserialize interfaces. A user-level plugin inherits from RpcAttachment and implements create/read/update/delete operations on top of it. On the client, the user sets the attachment via RpcController's set_request_attachment interface; the attachment is serialized via the Serialize interface and appended after the request_buffer before sending.
  • On the RpcServer side, the received request data is deserialized and the user attachment is extracted from the end of the request data, then handed to the user through the controller. The user calls RpcController::GetRequestAttachment(RpcAttachment& attach) and uses the attach object's Deserialize interface to deserialize the attachment. The server can then create, read, update, or delete attachment data.

Attachment base class:

class RpcAttachment
{
public:
    virtual ~RpcAttachment() { }
    virtual bool Serialize(ReadBufferPtr attachment_buffer) = 0;
    virtual bool Deserialize(ReadBufferPtr attachment_buffer) = 0;
};

New RpcController interfaces:

RpcAttachmentPtr _request_attachment;  // attachment object of the RPC request
RpcAttachmentPtr _response_attachment; // attachment object of the RPC response
ReadBufferPtr _request_attach_buffer;  // attachment data buffer of the RPC request
ReadBufferPtr _response_attach_buffer; // attachment data buffer of the RPC response
// User-facing interfaces
// Set the request attachment
void SetRequestAttachment(RpcAttachment* request_attachment);
// Get the request attachment
void GetRequestAttachment(RpcAttachment* request_attachment);
// Set the response attachment
void SetResponseAttachment(RpcAttachment* response_attachment);
// Get the response attachment
void GetResponseAttachment(RpcAttachment* response_attachment);
// RPC-internal interfaces
// Set request_attachment_buffer
void SetRequestAttachmentBuffer(const ReadBufferPtr& attachment_buffer);
// Get request_attachment_buffer
const ReadBufferPtr& GetRequestBuffer() const;
// Set response_attachment_buffer
void SetResponseAttachmentBuffer(const ReadBufferPtr& attachment_buffer);
// Get response_attachment_buffer
const ReadBufferPtr& GetResponseBuffer() const;

Usage

Cookie plugin:

class RpcCookie : public sofa::pbrpc::RpcAttachment
{
public:
    virtual bool Serialize(ReadBufferPtr& attachment_buffer);
    virtual bool Deserialize(const ReadBufferPtr& attachment_buffer);
    void Get(const std::string& key, std::string& value);
    void Set(const std::string& key, const std::string& value);
    void Erase(const std::string& key);
    void Clear();
};

Client-side example:

    sofa::pbrpc::RpcController* cntl = new sofa::pbrpc::RpcController();
    cntl->SetTimeout(3000);
    CookiePtr cookie(new Cookie());
    cookie->Set("type", "sync");
    cntl->SetRequestAttachment(cookie.get());
    sofa::pbrpc::test::EchoRequest* request = new sofa::pbrpc::test::EchoRequest();
    request->set_message("Hello from client");
    sofa::pbrpc::test::EchoResponse* response = new sofa::pbrpc::test::EchoResponse();
    sofa::pbrpc::test::EchoServer_Stub* stub =new sofa::pbrpc::test::EchoServer_Stub(&rpc_channel);
    stub->Echo(cntl, request, response, NULL);
    if (cntl->Failed())
    {
        SLOG(ERROR, "request failed: %s", cntl->ErrorText().c_str());
    }
    else
    {
        cookie.reset(new Cookie());
        cntl->GetResponseAttachment(cookie.get());
        std::string version;
        cookie->Get("version", version);
        SLOG(NOTICE, "request succeed: %s, version: %s", response->message().c_str(), version.c_str());
    }

Server-side example:

   virtual void Echo(google::protobuf::RpcController* controller,
                      const sofa::pbrpc::test::EchoRequest* request,
                      sofa::pbrpc::test::EchoResponse* response,
                      google::protobuf::Closure* done)
    {
        sofa::pbrpc::RpcController* cntl = static_cast<sofa::pbrpc::RpcController*>(controller);
        SLOG(INFO, "Echo(): request message from %s: %s",
                cntl->RemoteAddress().c_str(), request->message().c_str());
        CookiePtr cookie(new Cookie());
        cntl->GetRequestAttachment(cookie.get());
        cookie->Set("version", "1.00");
        cntl->SetResponseAttachment(cookie.get());
        response->set_message("echo message: " + request->message());
        done->Run();
    }

Add server-timeout support to avoid bandwidth wasted on unnecessary responses

Problem

After a request times out, the corresponding socket is not actually closed, so data arriving after the timeout is still received. When large data blocks are transferred, this occupies considerable bandwidth.

Analysis

The problem exists because:

  • The socket connection is kept open: as long as there is no transport error and keep_alive_time is not exceeded, the socket is not closed. The benefit is that sockets are reused as much as possible, avoiding the cost of re-establishing connections. Sockets are also multiplexed: different requests to the same destination address share one socket channel, so closing the socket just because one request timed out would be unreasonable.
  • A request timeout is not a transport error, so the socket is not closed. The key problem is that the server does not know the client has already timed out, so it still processes the request normally and sends the result back to the client, occupying bandwidth.

Solution

The idea is as follows (a small sketch follows this list):

  • When the client sends a request to the server, it carries a server_timeout. The server records receive_time when the request arrives; after processing completes, it computes the actual processing time as process_time = current_time - receive_time. If process_time > server_timeout, it can be inferred that the client has already timed out, so there is no need to send the response.
  • server_timeout is smaller than client_timeout; it can be computed as client_timeout - (client_send_time - client_begin_time).
  • This can be optimized further: at any point in the request lifecycle (sending the request, receiving it, starting processing, finishing processing, sending the response), the remaining time can be estimated, and if it has run out, processing can be aborted early, since the client has already timed out and further work is pointless. This reduces unnecessary request sending, server-side processing, and response sending.
  • This problem is also related to cancel: google::protobuf::RpcController has a StartCancel() interface that can cancel a request so that it terminates early, with a similar goal. That feature is not implemented yet and is on the TODO list.
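
A small sketch of the server-side check in the first bullet; the field names are illustrative rather than the actual sofa-pbrpc members:

#include <stdint.h>

struct PendingResponse
{
    int64_t receive_time_us;    // recorded when the request was read from the socket
    int64_t server_timeout_us;  // carried in the request, derived from the client timeout
};

// Returns true if the response is still worth sending back to the client.
bool should_send_response(const PendingResponse& r, int64_t now_us)
{
    if (r.server_timeout_us <= 0) {
        return true;   // no timeout information: keep the old behaviour
    }
    int64_t process_time_us = now_us - r.receive_time_us;
    // If processing already exceeded the budget, the client has timed out,
    // so sending the response would only waste bandwidth.
    return process_time_us <= r.server_timeout_us;
}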

Implementation

See pull request #84.
There is still room for improvement:

  • Currently the client passes total_timeout directly to the server as server_timeout. The time a request spends queued in the client's sending buffer is not taken into account; when the network is busy the queueing time can be long, so server_timeout can differ considerably from the real remaining time and unnecessary responses will still be sent. The TODO left in RpcClientStream::on_sending() refers to exactly this.

RpcClientImpl hangs when Stop is called

Stack state:
#0 0x00007f1834e7b22d in pthread_join () from /lib64/libpthread.so.0
#1 0x00000000006215ab in sofa::pbrpc::ThreadGroupImpl::stop (this=0x228f6b0) at src/sofa/pbrpc/thread_group_impl.h:182
#2 0x00000000006176c7 in sofa::pbrpc::RpcClientImpl::Stop (this=0x22be000) at src/sofa/pbrpc/rpc_client_impl.cc:109

The io_service memory contents look like this:
(gdb) p *(boost::asio::detail::task_io_service * const) 0x22738e0
$26 = {boost::asio::detail::service_baseboost::asio::detail::task_io_service = {boost::asio::io_service::service = {boost::noncopyable_::noncopyable = {},
vptr.service = 0xa24a90 <vtable for boost::asio::detail::task_io_service+16>, key = {type_info_ = 0xa245a0 <typeinfo for boost::asio::detail::typeid_wrapperboost::asio::detail::task_io_service>,
id_ = 0x0}, owner_ = @0x228f6d0, next_ = 0x0}, static id = {boost::asio::io_service::id = {boost::noncopyable_::noncopyable = {}, }, }},
one_thread_ = false, mutex_ = {boost::noncopyable_::noncopyable = {}, mutex_ = {__data = {__lock = 0, __count = 0, __owner = 0, _nusers = 7, kind = 0, spins = 0, list = {
prev = 0x0, next = 0x0}}, size = '\000' <repeats 12 times>, "\a", '\000' <repeats 26 times>, align = 0}}, task = 0x2266e10,
task_operation
= {boost::asio::detail::task_io_service_operation = {next
= 0x0, func
= 0x0, task_result
= 0}, }, task_interrupted = false, outstanding_work = {value = 3},
op_queue = {boost::noncopyable::noncopyable = {}, front = 0x0, back = 0x0}, stopped = false, shutdown = false, first_idle_thread = 0x7f182991fce0}

My understanding is that after stop() is called, task_io_service's outstanding_work_ variable should be decremented to 0 and its run() function should exit,
which would let pthread_join return successfully. Where could the problem be?

Is 32 bit supported?

Hi folks,

This is really good work and I'd like to try it out. However, it looks like it doesn't compile on a 32-bit platform; it failed with the following error:

src/sofa/pbrpc/atomic.h: Assembler messages:
src/sofa/pbrpc/atomic.h:59: Error: invalid instruction suffix for `xadd'
make: *** [src/sofa/pbrpc/dynamic_rpc_channel_impl.o] Error 1

After adding the -m64 flag to the Makefile and installing the 64-bit gcc library, it compiles but fails to link because there is no 64-bit protobuf library.

I could try it on a 64-bit platform, but I just want to check whether 32-bit is supported. Thank you!

Regards
Xiangning

How can per-thread data be attached?

The thread initialization callback takes no arguments (ExtClosure<bool()>* work_thread_init_func;). How can per-thread data be attached?

The "resolve addr fail" problem

When an rpc_channel object is initialized, it resolves the peer IP. If resolution fails, all subsequent call_method calls on that object fail, and the only remedy is for the user to destroy the channel and create a new one. It would be better to handle this at the RPC layer, for example by checking in call_method how long ago the last resolution failure happened and retrying when appropriate.

Error when building the examples under './sample' on macOS; how can it be fixed?

g++ echo_service.pb.o server.o -o server ../../output/lib/libsofa-pbrpc.a /Users/wangdesheng/Desktop/code/cpp_program/sofa-pbrpc/lib/protobuf/output/lib/libprotobuf.a /Users/wangdesheng/Desktop/code/cpp_program/sofa-pbrpc/lib/snappy/output/lib/libsnappy.a -L/Users/wangdesheng/Desktop/code/cpp_program/sofa-pbrpc/lib/zlib/output/lib -lpthread -lz
Undefined symbols for architecture x86_64:
"_ProfilerStart", referenced from:
sofa::pbrpc::Profiling::CpuProfilingFunc() in libsofa-pbrpc.a(profiling.o)
sofa::pbrpc::Profiling::DoCpuProfiling(sofa::pbrpc::Profiling::DataType) in libsofa-pbrpc.a(profiling.o)
"_ProfilerStop", referenced from:
sofa::pbrpc::Profiling::CpuProfilingFunc() in libsofa-pbrpc.a(profiling.o)
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [server] Error 1

Running make directly on the samples fails

Hi:

I tried the echo example under ./sample; running make directly reports "ERROR: need snappy header". I am on Mac OS X 10.10.

I downloaded the snappy 1.1.2 package from the address given in the install guide and it installed successfully. sofa-pbrpc itself also installed successfully and runs fine.

Looking at the make script, it seems the SNAPPY_DIR variable is not picked up, or something along those lines.

Switching traffic during deployments in production

Hi:
I've been meaning to ask: the wiki says the client supports connection liveness probing (for the case where a server becomes unavailable). But in a production environment there is always the question of switching traffic during a deployment, so I would like to know whether there is a ZooKeeper-like mechanism, i.e. a feature that allows traffic to be switched away proactively without affecting online services.
Thanks

internal compiler error

g++ -O2 -pipe -W -Wall -fPIC -D_GNU_SOURCE -D__STDC_LIMIT_MACROS -DHAVE_SNAPPY -Isrc -I/home/yan97ao/Downloads/boost_1_53_0 -I/home/yan97ao/Downloads/protobuf-2.4.1/output/include -I/home/yan97ao/Downloads/snappy-1.1.2/output/include -I/include -c -o src/sofa/pbrpc/block_wrappers.o src/sofa/pbrpc/block_wrappers.cc
g++ -O2 -pipe -W -Wall -fPIC -D_GNU_SOURCE -D__STDC_LIMIT_MACROS -DHAVE_SNAPPY -Isrc -I/home/yan97ao/Downloads/boost_1_53_0 -I/home/yan97ao/Downloads/protobuf-2.4.1/output/include -I/home/yan97ao/Downloads/snappy-1.1.2/output/include -I/include -c -o src/sofa/pbrpc/boost_system_error_code.o src/sofa/pbrpc/boost_system_error_code.cc
src/sofa/pbrpc/boost_system_error_code.cc:440:1: internal compiler error: in function_and_variable_visibility, at ipa.c:929
Please submit a full bug report,
with preprocessed source if appropriate.
See file:///usr/share/doc/gcc-4.6/README.Bugs for instructions.
Preprocessed source stored into /tmp/ccG9Nrmn.out file, please attach this to your bugreport.
make: *** [src/sofa/pbrpc/boost_system_error_code.o] Error 1

Tested on Ubuntu 12.04 with gcc 4.6 and on Ubuntu 14.04 with gcc 4.8.

[bug] The last request can hang or time out

When using sofa-pbrpc, we found that the server sometimes does not respond to the last request sent by the client. After printing a function trace, the scenario turned out to be:

  1. rpc_server_message_stream::on_write_some releases the send token and triggers try_start_send.
  2. try_start_send acquires the send token successfully and calls get_from_pending_queue to take a pending response from the queue.
  3. At this moment, async_send_message starts; put_into_pending_queue completes and triggers try_start_send, which fails to acquire the send token and returns immediately.
  4. The get_from_pending_queue call in step 2 does not necessarily see the response enqueued in step 3; when it misses it, the try_start_send triggered in step 2 also does nothing and returns.
  5. At this point, the response sitting in pending_queue never gets a chance to be sent, until the next time that connection needs to send a response.

Both 1.1.0 and master have this problem. The impact: if the client logic is blocking (using the blocking interface, or only issuing the next RPC after receiving the response to the previous request), the last request hangs or times out.

Thanks to Xi Liu ([email protected]) for reporting this problem.

Question about releasing the connection between client and server

Hi,
After the client and the server establish a connection, I shut down the server, but the connection on both sides stays in the ESTABLISHED state.
I see the client has a periodic check that should actively close connections idle longer than keep_alive_time, but it does not seem to take effect. Could you take a look?

Thanks

Segmentation fault at runtime when built with C++11

Hi, after upgrading to C++11 the build is fine, but the program crashes while initializing the client at runtime.

gdb output:

Program terminated with signal 11, Segmentation fault.
#0 signal_and_unlockboost::asio::detail::scoped_lock<boost::asio::detail::posix_mutex > (this=0xd2ae4c80, lock=...) at thirdparty/boost/asio/detail/posix_event.hpp:60

60 signalled_ = true;
Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.166.el6_7.3.x86_64 zlib-1.2.3-29.el6.x86_64

Backtrace from the core file:
#0 signal_and_unlockboost::asio::detail::scoped_lock<boost::asio::detail::posix_mutex > (this=0xd2ae4c80, lock=...) at thirdparty/boost/asio/detail/posix_event.hpp:60
#1 wake_one_idle_thread_and_unlock (this=0xd2ae4c80, lock=...) at thirdparty/boost/asio/detail/impl/task_io_service.ipp:484
#2 boost::asio::detail::task_io_service::wake_one_thread_and_unlock (this=0xd2ae4c80, lock=...) at thirdparty/boost/asio/detail/impl/task_io_service.ipp:493
#3 0x000000000054f32e in boost::asio::detail::task_io_service::init_task (this=0xd2ae4c80) at thirdparty/boost/asio/detail/impl/task_io_service.ipp:130
#4 0x0000000000552307 in init_task (this=0x2b0b918, io_service=Unhandled dwarf expression opcode 0xf3

) at thirdparty/boost/asio/detail/impl/epoll_reactor.ipp:145
#5 boost::asio::detail::deadline_timer_serviceboost::asio::time_traits<boost::posix_time::ptime >::deadline_timer_service (this=0x2b0b918, io_service=Unhandled dwarf expression opcode 0xf3

) at thirdparty/boost/asio/detail/deadline_timer_service.hpp:68
#6 0x00000000005523a6 in deadline_timer_service (owner=...) at thirdparty/boost/asio/deadline_timer_service.hpp:77
#7 boost::asio::detail::service_registry::create<boost::asio::deadline_timer_service<boost::posix_time::ptime, boost::asio::time_traitsboost::posix_time::ptime > > (owner=...) at thirdparty/boost/asio/detail/impl/service_registry.hpp:81
#8 0x000000000054c4a6 in boost::asio::detail::service_registry::do_use_service (this=0xd2add000, key=..., factory=0x552350 <boost::asio::detail::service_registry::create<boost::asio::deadline_timer_service<boost::posix_time::ptime, boost::asio::time_traitsboost::posix_time::ptime > >(boost::asio::io_service&)>)

at thirdparty/boost/asio/detail/impl/service_registry.ipp:123

#9 0x000000000055076e in use_service<boost::asio::deadline_timer_service<boost::posix_time::ptime, boost::asio::time_traitsboost::posix_time::ptime > > (this=0x2b1aac8, io_service=Unhandled dwarf expression opcode 0xf3

) at thirdparty/boost/asio/detail/impl/service_registry.hpp:48
#10 use_service<boost::asio::deadline_timer_service<boost::posix_time::ptime, boost::asio::time_traitsboost::posix_time::ptime > > (this=0x2b1aac8, io_service=Unhandled dwarf expression opcode 0xf3

) at thirdparty/boost/asio/impl/io_service.hpp:33
#11 boost::asio::basic_io_object<boost::asio::deadline_timer_service<boost::posix_time::ptime, boost::asio::time_traitsboost::posix_time::ptime > >::basic_io_object (this=0x2b1aac8, io_service=Unhandled dwarf expression opcode 0xf3

) at thirdparty/boost/asio/basic_io_object.hpp:90
#12 0x0000000000707791 in basic_deadline_timer (this=0x2b1aa80, io_service=...) at thirdparty/boost/asio/basic_deadline_timer.hpp:149
#13 sofa::pbrpc::TimerWorker::TimerWorker (this=0x2b1aa80, io_service=...) at ./sofa/pbrpc/timer_worker.h:27
#14 0x00000000006fc16f in sofa::pbrpc::RpcClientImpl::Start (this=0x2b26c00) at sofa/pbrpc/rpc_client_impl.cc:88
#15 0x00000000006fad5c in sofa::pbrpc::RpcClient::RpcClient (this=0xd29f76a0, options=Unhandled dwarf expression opcode 0xf3

) at sofa/pbrpc/rpc_client.cc:16
#16 0x000000000053dc6d in XXXX::XXXX::SofaClientInit (this=0x2b68000) at xxx/xxx/xxxx/XXXCore.cpp:347

Frame 16 (f 16) is:

client_options_ = (new sofa::pbrpc::RpcClientOptions());
345 client_options_->work_thread_num = work_thread_num;
346 client_options_->callback_thread_num = callback_thread_num;
347 rpc_client_ = (new sofa::pbrpc::RpcClient(*client_options_));
348 sofa_client_ = (new sofa::client::SofaClient<XXX_Stub>());
349 sofa_client_->Init(rpc_client_,ip_list_str);

The error occurs at line 347.

Is this a known issue? Is there a workaround?

The program core dumps after RpcServer Stop() is called.

#0 0x00007f89817f4625 in raise () from /lib64/libc.so.6
#1 0x00007f89817f5e05 in abort () from /lib64/libc.so.6
#2 0x00007f898204fa55 in __gnu_cxx::__verbose_terminate_handler () at ../../.././libstdc++-v3/libsupc++/vterminate.cc:95
#3 0x00007f898204dbf6 in __cxxabiv1::__terminate (handler=) at ../../.././libstdc++-v3/libsupc++/eh_terminate.cc:38
#4 0x00007f898204dc23 in std::terminate () at ../../.././libstdc++-v3/libsupc++/eh_terminate.cc:48
#5 0x00007f898204e6df in __cxxabiv1::__cxa_pure_virtual () at ../../.././libstdc++-v3/libsupc++/pure.cc:50
#6 0x0000000000631318 in sofa::pbrpc::RpcByteStream::on_connect(boost::system::error_code const&) ()
#7 0x000000000062aaed in boost::asio::detail::reactive_socket_connect_op<boost::bi::bind_t<void, boost::mfi::mf1<void, sofa::pbrpc::RpcByteStream, boost::system::error_code const&>, boost::bi::list2boost::bi::value<sofa::pbrpc::shared_ptr<sofa::pbrpc::RpcByteStream >, boost::arg<1> > > >::do_complete(boost::asio::detail::task_io_service, boost::asio::detail::task_io_service_operation, boost::system::error_code const&, unsigned long) ()
#8 0x000000000062dee9 in boost::asio::detail::epoll_reactor::descriptor_state::do_complete(boost::asio::detail::task_io_service
, boost::asio::detail::task_io_service_operation
, boost::system::error_code const&, unsigned long) ()
#9 0x000000000063081e in boost::asio::detail::task_io_service::run(boost::system::error_code&) ()
#10 0x000000000063118e in sofa::pbrpc::ThreadGroupImpl::thread_run(void*) ()
#11 0x00007f89827749d1 in start_thread () from /lib64/libpthread.so.0

Options for the server

How many options are defined in this framework?
Also, can we get some low-level info from the RPC layer, such as the fd and the peer IP? We may need to collect stats on these items. Thanks!

Low send rate for large packets on the client

The client's send rate is limited by the boost::socket send rate (the latency between async_write_some and on_write_some). When sending large packets, if the data spans multiple bufhandles, the send rate drops proportionally. Increasing the bufhandle size (and thereby reducing the number of buffers) improves the send rate for large packets.

PHP extension design

Functionality

Provide the library to users as a PHP extension; users use the extension library to make RPC calls to a server from PHP programs.

Interface design

User interface

  • The PHP-Protobuf library is allegro/php-protobuf. Its pb interface generation tool is modified to add the RPC-related Service and Method interfaces. After defining a proto, the user runs the protoc-php.php tool to generate the php-protobuf interface file, which calls functions of the sofa-pbrpc PHP extension to perform the RPC call.
  • The sofa-pbrpc PHP extension provides users with an interface class PHPRpcServiceStub. The rpc services in the generated pb interface inherit from this class, perform the RPC call through it, and obtain the error code and error message of the call. The Message part of the generated pb interface stays consistent with allegro/php-protobuf. Taking EchoServiceStub as an example, the service definition in the user's proto file is:
    service EchoServiceStub
    {
        rpc Echo(EchoRequest) returns(EchoResponse);
    }

The generated Service/Method interface functions are as follows:

    class EchoServiceStub extends PHPRpcServiceStub
    {
        // The user creates an rpc service instance through the EchoServiceStub class
        function __construct($address);
        // Register the method in the stub
        private function RegisterEcho();
        // Set the rpc call timeout
        public function SetTimeout($timeout);
        // Issue the RPC call
        public function Echo($request, $response, $closure);
        // Get the error code
        public function Failed();
        // Get the error message
        public function ErrorText();
    }

Extension interface

  • The extension's workflow consists of the following steps:
    1. Initialize the RpcServiceStub: create the RpcChannel, RpcController, and other components.
    2. Register the user's Methods: parse the input/output Descriptors and add entries to method_board.
    3. Convert the user's request pb structure into a Google pb structure.
    4. Call CallMethod to issue the RPC call.
    5. Convert the Google pb response structure back into the user's pb structure.
  • The interfaces the extension exposes to PHP are:
   PHP_METHOD(PHPRpcServiceStub, InitService);
   PHP_METHOD(PHPRpcServiceStub, SetTimeout);
   PHP_METHOD(PHPRpcServiceStub, Failed);
   PHP_METHOD(PHPRpcServiceStub, ErrorText);
   PHP_METHOD(PHPRpcServiceStub, RegisterMethod);
   PHP_METHOD(PHPRpcServiceStub, InitMethods);
   PHP_METHOD(PHPRpcServiceStub, CallMethod);
  • In the user interface, __construct calls InitService to create the RPC components and initialize the service.
  • RegisterEcho calls RegisterMethod and InitMethods to register and initialize the user's methods.
  • Echo calls CallMethod to complete the RPC call.
  • SetTimeout calls the extension's SetTimeout to set the timeout.
  • Failed and ErrorText in the user interface call Failed and ErrorText in the extension to get the error status and error message.

Usage

  • Build the sofa_pbrpc.so extension library and load it by modifying php.ini.
  • Use protoc-php.php to generate the proto_pb.php file and require that pb interface in your code; a sample will be provided in the extension directory.

Data in the send buffer may be discarded before it has been fully sent

sofa::pbrpc::RpcServerMessageStream<>::try_start_send()
{
...
                // now send
                _sent_size = 0;
                bool ret = _sending_message->Next(
                        reinterpret_cast<const void**>(&_sending_data), &_sending_size);
                SCHECK(ret);
                async_write_some(_sending_data, _sending_size);
                started = true;
                break;
...
}

The try_start_send function above should call Next in a loop; otherwise the data in _sending_message may not be sent completely.

sofa::pbrpc::RpcMessageStream<> has the same problem.

curl requests to a sofa rpc server are very slow

In the echo sample's client_http.sh, replacing the posted "Hello, world!" with a 1 KB string increases the response time to about two seconds.

Rate limiting is inaccurate when continuously sending large packets (4 MB)

I set max_throughput_out to 10 (a 10 MB/s limit) trying to cap the outgoing network speed at 10 MB/s, but the speed could not be controlled and stayed far above that value. In my environment the speed is 70 MB/s without the option and 42 MB/s with max_throughput_out set to 10.

Can this problem be fixed?

Build problems on Ubuntu

boost_system_error_code.cc
The anonymous namespace in this file has to be commented out, otherwise it triggers a gcc 4.6 bug and the build fails.

atomic_inc_ret_old64 in atomic.h does not compile on 32-bit machines; switching to __sync_fetch_and_add works. Why not just use gcc's built-in __sync_fetch_and_add family directly?

It would also be better to generate the protocol buffer files at make time.

PHP support

Is there a plan to support a PHP client later? PHP can work around this via HTTP access, but needs such as compression still cannot be supported well that way, can they?

Mac build failure

CXXFLAGS ?= -DSOFA_PBRPC_ENABLE_DETAILED_LOGGING

This causes the Mac build to fail.
