
linkage's People

Contributors

xusleep

linkage's Issues

Recently I have been studying Dubbo. Dubbo's invocation mechanism is essentially the same as my implementation; the world is full of such coincidences.

Reference: http://itindex.net/detail/50672-dubbo-%E6%9C%AC%E5%8E%9F

Dubbo's default protocol uses a single long-lived connection with NIO asynchronous communication.
It suits service calls with small payloads and high concurrency, and scenarios where consumer machines far outnumber provider machines.
From reading the source code, the basic flow is:
1. A client thread invoking a remote interface generates a unique ID (for example a random string or a UUID; Dubbo uses an AtomicLong counting up from 0).
2. The packaged invocation information (interface name, method name, argument list, etc.) and the callback object that will receive the result are wrapped together into one object.
3. The client puts (ID, object) into a global ConcurrentHashMap dedicated to in-flight calls.
4. The ID and the packaged invocation information are wrapped into a request object connRequest and sent asynchronously via IoSession.write(connRequest).
5. The calling thread then calls get() on the callback to obtain the remote result. Inside get(), it takes the callback's lock with synchronized, first checks whether the result has already arrived, and if not calls wait() on the callback, releasing the lock and putting the thread into the waiting state.
6. After the server receives and processes the request, it sends the result back to the client, echoing the original ID. The client thread dedicated to listening on the socket connection receives the message, parses out the ID, calls get(ID) on the ConcurrentHashMap to find the callback, and stores the result in it.
7. The listener thread then takes the callback's lock with synchronized (the calling thread released it when it called wait()) and calls notifyAll(), waking the waiting thread; its get() call now returns the result, and the round trip is complete.

Q: How do we "pause" the calling thread and resume it once the result comes back?
A: Create an object obj and put (ID, obj) into a global map. Take obj's lock with synchronized and call obj.wait() to put the current thread into the waiting state. When the server's result arrives, the message-listener thread finds obj via map.get(ID), takes obj's lock with synchronized, and calls obj.notifyAll() to wake the waiting thread.

Q: As noted above, socket communication is full duplex. If multiple threads make remote calls concurrently, the single client-server socket connection carries many messages from both sides, possibly in arbitrary order. When the server finishes processing and sends results back, the client receives many messages; how does it know which result belongs to which calling thread?
A: Attach a unique ID to the request, pass it to the server, and have the server echo it back in the response; the ID identifies the originating thread.
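The mechanism above can be sketched as follows. This is a minimal illustration of the ID-correlated wait/notify scheme, not the actual linkage or Dubbo source; the names CallbackFuture and PENDING are invented for the example.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class CallbackDemo {
    // Global map of in-flight calls, keyed by request ID.
    static final Map<Long, CallbackFuture> PENDING = new ConcurrentHashMap<>();
    static final AtomicLong ID_GEN = new AtomicLong(0); // Dubbo-style counter

    static class CallbackFuture {
        private Object result;
        private boolean done;

        synchronized Object get() throws InterruptedException {
            while (!done) {   // guard against spurious wakeups
                wait();       // releases this lock while waiting
            }
            return result;
        }

        synchronized void set(Object value) {
            result = value;
            done = true;
            notifyAll();      // wake the thread blocked in get()
        }
    }

    public static void main(String[] args) throws Exception {
        long id = ID_GEN.incrementAndGet();
        CallbackFuture future = new CallbackFuture();
        PENDING.put(id, future);  // register before sending the request

        // Simulated listener thread: the server response arrives carrying
        // the same ID; the callback is looked up and completed.
        final long echoedId = id;
        new Thread(() -> {
            CallbackFuture cb = PENDING.remove(echoedId);
            cb.set("result-for-" + echoedId);
        }).start();

        System.out.println(future.get()); // blocks until notifyAll()
    }
}
```

The `while (!done)` loop implements the "first check whether the result has already arrived" step and also protects against spurious wakeups, which a bare `wait()` would not.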

Requests are not answered promptly

When many requests are sent over a single connection, responses cannot be returned in time.
Consider switching the channel to the writable state after accepting a certain number of requests, so that responses can go out promptly.

Registry single point of failure

The system currently has a single registry; if it goes down, newly arriving clients cannot fetch any data.
We therefore need standby registries, with multiple registries cooperating: when one registry fails, clients can immediately fail over to a standby.
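The failover described above might look like the following sketch. The registry addresses and the lookup function are hypothetical; this only illustrates "try each registry in order, move to the standby on failure".

```java
import java.util.List;
import java.util.function.Function;

public class RegistryFailoverDemo {
    // Try each registry in order; the first healthy one serves the request.
    static String lookup(List<String> registries, Function<String, String> fetch) {
        for (String registry : registries) {
            try {
                return fetch.apply(registry);
            } catch (RuntimeException down) {
                // this registry is down; fall through to the standby
            }
        }
        throw new IllegalStateException("all registries are down");
    }

    public static void main(String[] args) {
        List<String> registries = List.of("registry-1:2181", "registry-2:2181");
        // Simulate registry-1 being down; registry-2 answers.
        String serviceList = lookup(registries, addr -> {
            if (addr.startsWith("registry-1")) throw new RuntimeException("down");
            return "providers@" + addr;
        });
        System.out.println(serviceList);
    }
}
```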

Add a connection pool to the client to solve the load-balance issue.

Every client may connect to the same service instance, which leads to a load-balance problem.
Solution:
When the client fetches the service list, it connects to all of the service instances and maintains those connections in a pool.
When a request is made, the client's route function can then pick any one of the pooled connections.
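A minimal sketch of the proposed pool, assuming a simple round-robin route function; Connection, ConnectionPool, and the addresses are illustrative, not the actual linkage classes.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicInteger;

public class ConnectionPoolDemo {
    static class Connection {
        final String address;
        Connection(String address) { this.address = address; }
    }

    static class ConnectionPool {
        private final List<Connection> connections = new CopyOnWriteArrayList<>();
        private final AtomicInteger next = new AtomicInteger(0);

        // Called after fetching the service list: one connection per instance.
        void connectAll(List<String> serviceList) {
            for (String addr : serviceList) {
                connections.add(new Connection(addr));
            }
        }

        // Route function: pick pooled connections in round-robin order.
        Connection route() {
            int i = Math.floorMod(next.getAndIncrement(), connections.size());
            return connections.get(i);
        }
    }

    public static void main(String[] args) {
        ConnectionPool pool = new ConnectionPool();
        pool.connectAll(List.of("10.0.0.1:8080", "10.0.0.2:8080"));
        System.out.println(pool.route().address);
        System.out.println(pool.route().address);
        System.out.println(pool.route().address); // wraps around
    }
}
```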

Server/client deadlock

When the client issues a large number of concurrent requests, the server stops processing; requests received later are blocked behind the earlier ones and cannot be handled. The preliminary diagnosis is a deadlock.

How to drop unwanted messages

We currently read packets in the following layout:

8 bytes        | 1 byte    | data
packet length  | data type | payload

What if someone sends malformed data to the server? The server will never complete the read: it keeps reading until the number of bytes received reaches the declared packet length, and if that never happens it keeps reading forever, effectively an infinite loop.
How do we avoid this?
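One common answer, sketched below under assumptions (the 16 MB cap and the type constants are invented, not from the linkage protocol): validate the 8-byte length and 1-byte type fields before committing to read the payload, and drop the connection on anything implausible. A read timeout on the payload should trigger the same drop.

```java
import java.nio.ByteBuffer;

public class FrameValidatorDemo {
    static final long MAX_FRAME = 16 * 1024 * 1024; // assumed upper bound
    static final byte TYPE_REQUEST = 1, TYPE_RESPONSE = 2;

    // Returns the payload length if the header is sane, or -1 meaning
    // "drop this connection" instead of waiting forever for bytes.
    static long validateHeader(ByteBuffer header) {
        long length = header.getLong(); // 8-byte packet length
        byte type = header.get();       // 1-byte data type
        if (length < 0 || length > MAX_FRAME) return -1;
        if (type != TYPE_REQUEST && type != TYPE_RESPONSE) return -1;
        return length;
    }

    public static void main(String[] args) {
        ByteBuffer good = ByteBuffer.allocate(9).putLong(128).put(TYPE_REQUEST);
        good.flip();
        ByteBuffer bad = ByteBuffer.allocate(9).putLong(Long.MAX_VALUE).put((byte) 9);
        bad.flip();
        System.out.println(validateHeader(good)); // sane header
        System.out.println(validateHeader(bad));  // rejected
    }
}
```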

Move the route module into the configuration management center

Move the route module into the configuration management center to keep the service framework clean.

Changes:
1, remove the route configuration from the property file.
2, add the route module to the configuration management center.
3, change the consumer class hierarchy.
4, add a new DefaultRouteConsume that extends DefaultConsume; the service-center routing logic lives in this class.
5, add a new client bootstrap for the configuration-center path.

After fetching a service from the registry, the client subscribes to stay notified of updates to that service's provider list

  1. After fetching a service from the registry, the client uses a subscription model to stay notified of updates to that service's provider list.
  2. Once a service registers with the registry, it keeps a heartbeat with the registry; on any change, the registry notifies the interested clients (i.e. the clients that have fetched that service).
  3. The heartbeat should run over a long-lived connection. The current implementation uses short-lived connections and must be improved: short connections cannot deliver reasonably real-time server status, and they are less efficient.
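Point 3 above can be sketched as follows: one scheduled task reuses a single channel for every beat instead of opening a short connection each time. The Channel interface here is hypothetical, standing in for the long-lived connection.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HeartbeatDemo {
    interface Channel { void send(String msg); }

    public static void main(String[] args) throws Exception {
        CountDownLatch beats = new CountDownLatch(3);
        Channel channel = msg -> beats.countDown(); // stands in for the long connection

        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        // Same channel every beat: no connect/teardown cost per heartbeat.
        timer.scheduleAtFixedRate(() -> channel.send("PING"),
                0, 50, TimeUnit.MILLISECONDS);

        boolean ok = beats.await(2, TimeUnit.SECONDS); // saw 3 heartbeats
        timer.shutdownNow();
        System.out.println(ok);
    }
}
```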

Read count is 0

When reading a message, we read until the number of bytes received equals the declared message length. But sometimes a problem on the client side means the data does not arrive on time. Does the server then read repeatedly, spinning while the read count is 0? (I think this will happen.) Does a read count of 0 also cause the server to break down? (It will not.)
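The read-count-0 behaviour can be reproduced in-process with an NIO Pipe: a non-blocking read with no data available returns 0 rather than blocking or failing, which is why a loop that retries immediately will spin. (The usual remedy is to return to the selector and wait for the next OP_READ event instead of retrying.)

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;

public class ReadZeroDemo {
    public static void main(String[] args) throws Exception {
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);

        ByteBuffer buf = ByteBuffer.allocate(64);
        // Nothing written yet: non-blocking read returns 0, not an error.
        System.out.println(pipe.source().read(buf));

        pipe.sink().write(ByteBuffer.wrap("hello".getBytes()));
        // Data has arrived: the same read now returns the byte count.
        System.out.println(pipe.source().read(buf));
    }
}
```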

Cloud computing on top of this service framework

Following Google's MapReduce approach to big data, we will also implement a cloud big-data computing framework.
Using this framework, a job is distributed across many computers; after all of them finish computing, the results are gathered together.
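The scatter/gather flow described above, in miniature, with threads standing in for the distributed machines; purely illustrative of the MapReduce-style shape, not the planned framework's API.

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ScatterGatherDemo {
    public static void main(String[] args) throws Exception {
        List<List<Integer>> splits = List.of(List.of(1, 2), List.of(3, 4), List.of(5));
        ExecutorService workers = Executors.newFixedThreadPool(3);

        // Scatter: each worker sums its own split (the "map" phase).
        List<Future<Integer>> partials = workers.invokeAll(splits.stream()
                .map(split -> (Callable<Integer>) () ->
                        split.stream().mapToInt(Integer::intValue).sum())
                .toList());

        // Gather: combine the partial results (the "reduce" phase).
        int total = 0;
        for (Future<Integer> f : partials) total += f.get();
        workers.shutdown();
        System.out.println(total);
    }
}
```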

A viable approach to small-file handling (a cloud-storage option)

The usual approach to small-file storage is to merge small files: pre-allocate one large file, then synchronously and continuously append many small files into it. This turns the random writes of many small files into streaming sequential writes; random writes incur a lot of disk seek and rotational latency, which becomes a performance bottleneck.
Merging, however, requires extensive indexing and makes file operations more complex. In phase one we will therefore not merge small files but write directly to the file system. To relieve the disk bottleneck caused by random reads and writes and to raise throughput, each datanode will mount multiple disks.
A small-file merging strategy will be implemented later. The storage layout is transparent to the business system, so switching storage strategies later will require no business-side changes.
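The merging idea can be sketched as follows: small files are appended sequentially into one large file, and an index maps each name to its (offset, length) so it can be read back. The index format and API here are invented for illustration; a real implementation would persist the index.

```java
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

public class SmallFileMergeDemo {
    record Extent(long offset, int length) {}

    public static void main(String[] args) throws Exception {
        Path big = Files.createTempFile("bigfile", ".dat");
        Map<String, Extent> index = new HashMap<>();

        try (RandomAccessFile raf = new RandomAccessFile(big.toFile(), "rw")) {
            // Sequential appends: no seek between small-file writes.
            for (String name : new String[] {"a.txt", "b.txt"}) {
                byte[] content = ("contents of " + name).getBytes();
                index.put(name, new Extent(raf.getFilePointer(), content.length));
                raf.write(content);
            }
            // Read one small file back through the index.
            Extent e = index.get("b.txt");
            raf.seek(e.offset());
            byte[] out = new byte[e.length()];
            raf.readFully(out);
            System.out.println(new String(out));
        }
        Files.deleteIfExists(big);
    }
}
```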
