nifty's Introduction

Project Status: 🚨 Unmaintained 🚨

This project is archived and no longer maintained. At the time of archiving, open issues and pull requests were closed and tagged with 2018-05-archive. For pre-existing users who need an open source alternative, we recommend taking a look at airlift/drift.

Nifty

Nifty is an implementation of Thrift clients and servers on Netty.

It is also the implementation used by Swift.

Examples

To create a basic Thrift server using Nifty, use the Thrift 0.9.0 code generator to generate Java stub code, write a Handler for your service interface, and pass it to Nifty like this:

public void startServer() {
    // Create the handler
    MyService.Iface serviceInterface = new MyServiceHandler();

    // Create the processor
    TProcessor processor = new MyService.Processor<>(serviceInterface);

    // Build the server definition
    ThriftServerDef serverDef = new ThriftServerDefBuilder().withProcessor(processor)
                                                            .build();

    // Create the server transport
    final NettyServerTransport server = new NettyServerTransport(serverDef,
                                                                 new NettyServerConfigBuilder(),
                                                                 new DefaultChannelGroup(),
                                                                 new HashedWheelTimer());

    // Create netty boss and executor thread pools
    ExecutorService bossExecutor = Executors.newCachedThreadPool();
    ExecutorService workerExecutor = Executors.newCachedThreadPool();

    // Start the server
    server.start(bossExecutor, workerExecutor);

    // Arrange to stop the server at shutdown
    Runtime.getRuntime().addShutdownHook(new Thread() {
        @Override
        public void run() {
            try {
                server.stop();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    });
}

Or the same thing using Guice:

public void startGuiceServer() {
    final NiftyBootstrap bootstrap = Guice.createInjector(
        Stage.PRODUCTION,
        new NiftyModule() {
            @Override
            protected void configureNifty() {
                // Create the handler
                MyService.Iface serviceInterface = new MyServiceHandler();

                // Create the processor
                TProcessor processor = new MyService.Processor<>(serviceInterface);

                // Build the server definition
                ThriftServerDef serverDef = new ThriftServerDefBuilder().withProcessor(processor)
                                                                        .build();

                // Bind the definition
                bind().toInstance(serverDef);
            }
        }).getInstance(NiftyBootstrap.class);

    // Start the server
    bootstrap.start();

    // Arrange to stop the server at shutdown
    Runtime.getRuntime().addShutdownHook(new Thread() {
        @Override
        public void run() {
            bootstrap.stop();
        }
    });
}

nifty's Issues

Publish jars to OSS Snapshots

This will make sure that people trying out facebook/swift 0.14.0-SNAPSHOT can get the dependent JARs easily. For now, I had to manually clone the repo and build it locally.

cancelAllTimeouts?

https://github.com/facebook/nifty/blob/ba507e1c4b446cde0fe9a16ebdbcc7664840a41e/nifty-client/src/main/java/com/facebook/nifty/client/AbstractClientChannel.java#L397

private void onSendTimeoutFired(Request request)
    {
        cancelAllTimeouts();
        WriteTimeoutException timeoutException = new WriteTimeoutException("Timed out waiting " + getSendTimeout() + " to send data to server");
        fireChannelErrorCallback(request.getListener(), new TTransportException(TTransportException.TIMED_OUT, timeoutException));
    }

Why cancel all timeouts when only one request has timed out?

nifty server leaks timers

The ThriftServerModule binds a nifty timer out of a Provides method:

@Provides
@ThriftServerTimer
@Singleton
public Timer getThriftServerTimer()
{
    return new NiftyTimer("thrift");
}

and the Timer is annotated with @PreDestroy. However, it never gets hooked into the lifecycle, so when the server is shut down, it leaks its timer. This is not a big deal for an actual server, but unit tests which start and stop a lot of services accumulate timers (and their associated resources). A workaround sketch follows.
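
As a stopgap for tests, a minimal sketch, assuming NiftyTimer inherits stop() from Netty 3's HashedWheelTimer (the test class and wiring here are hypothetical):

    import org.junit.After;
    import org.junit.Before;
    import com.facebook.nifty.core.NiftyTimer;

    public class MyServiceLifecycleTest {
        private NiftyTimer timer;

        @Before
        public void setUp() {
            timer = new NiftyTimer("thrift-test");
            // ... wire the timer into the service under test ...
        }

        @After
        public void tearDown() {
            // HashedWheelTimer.stop() releases the timer's worker thread;
            // without this, every started service leaks a timer thread.
            timer.stop();
        }
    }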

Upgrade to Netty 4

Netty 4 is nearing completion, so it would be lovely to see it supported.

Maven 3.1 cannot mvn install?

On a clean Ubuntu 12 install, building nifty with Maven 3.1 produces errors:
Caused by: java.lang.ClassNotFoundException: org.sonatype.aether.version.VersionConstraint
at org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy.loadClass(SelfFirstStrategy.java:50)
at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:244)
at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:230)
... 64 more

Consider adding Closeable to NiftyClient

NiftyClient currently has a shutdown method. If Closeable were added to NiftyClient and this method renamed to close, NiftyClient could be used in Java 7 try-with-resources statements.
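
A minimal sketch of the suggestion, assuming the existing method really is a no-argument shutdown() (the wrapper below is hypothetical, not Nifty API):

    import java.io.Closeable;
    import com.facebook.nifty.client.NiftyClient;

    // Hypothetical adapter: if NiftyClient implemented Closeable directly,
    // no wrapper would be needed.
    public class CloseableNiftyClient implements Closeable {
        private final NiftyClient client;

        public CloseableNiftyClient(NiftyClient client) {
            this.client = client;
        }

        @Override
        public void close() {
            client.shutdown(); // delegates to the existing shutdown method
        }
    }

With that in place, the client could be opened in a try-with-resources statement and shut down automatically at the end of the block.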

Support for multiplex protocol

We haven't looked into whether this functionality from Apache Thrift works on nifty yet; we should check whether it works and make any necessary adjustments. A reference sketch of the upstream API follows.
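
For reference, the server side of the Apache Thrift multiplexing API looks roughly like this (TMultiplexedProcessor is standard in libthrift 0.9.x; the service names and handlers are hypothetical, and whether the result drops into Nifty unchanged is exactly what needs checking):

    import org.apache.thrift.TMultiplexedProcessor;

    // Register several generated service processors under one port; clients
    // prefix each call with the service name via TMultiplexedProtocol.
    TMultiplexedProcessor multiplexed = new TMultiplexedProcessor();
    multiplexed.registerProcessor("Calculator",
            new Calculator.Processor<>(new CalculatorHandler()));
    multiplexed.registerProcessor("WeatherReport",
            new WeatherReport.Processor<>(new WeatherReportHandler()));

    // In principle, the multiplexed processor would then be handed to Nifty:
    ThriftServerDef serverDef = new ThriftServerDefBuilder()
            .withProcessor(multiplexed)
            .build();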

Get better time stats

For example, how long a request spends in the queue until it starts being processed

SSL Support?

What is the recommended way of adding SSL to Nifty?

Proper timeout (and other exceptions) handling

Sometimes the server fails with the following exception:

EXCEPTION, please implement org.jboss.netty.handler.codec.http.HttpChunkAggregator.exceptionCaught() for proper handling.
org.jboss.netty.channel.ConnectTimeoutException: connection timed out: /10.11.10.35:8082
    at org.jboss.netty.channel.socket.nio.NioClientBoss.processConnectTimeout(NioClientBoss.java:137)

It would be nice to be able to handle it as Netty suggests.
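
A hedged sketch of what Netty 3 suggests: add a handler near the end of the pipeline that overrides exceptionCaught, so connect timeouts are handled explicitly instead of falling through to the default "please implement exceptionCaught" warning (where such a handler would sit in Nifty's pipeline is left open):

    import org.jboss.netty.channel.ChannelHandlerContext;
    import org.jboss.netty.channel.ConnectTimeoutException;
    import org.jboss.netty.channel.ExceptionEvent;
    import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

    public class TimeoutAwareHandler extends SimpleChannelUpstreamHandler {
        @Override
        public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) throws Exception {
            if (e.getCause() instanceof ConnectTimeoutException) {
                // Handle the timeout explicitly (log it, retry, surface it
                // to the caller) instead of the default warning.
                e.getChannel().close();
                return;
            }
            super.exceptionCaught(ctx, e); // everything else: default behavior
        }
    }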

jdk8

If my project uses JDK 7, how can I use nifty?

Some types of exception should not close the channel

Currently any exception thrown in the channel pipeline will close the client connection. There are some types of exceptions that we could respond to with a TApplicationException instead of closing the channel (e.g. TooLongFrameException... we can't send the correct response if the frame is too long, but we can send a TApplicationException indicating the method failed instead of closing the channel).
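
As a rough sketch of the reply the issue describes, using standard libthrift APIs (recovering the method name and sequence id from the partially-decoded request is the hard part, glossed over here as parameters):

    import org.apache.thrift.TApplicationException;
    import org.apache.thrift.TException;
    import org.apache.thrift.protocol.TBinaryProtocol;
    import org.apache.thrift.protocol.TMessage;
    import org.apache.thrift.protocol.TMessageType;
    import org.apache.thrift.protocol.TProtocol;
    import org.apache.thrift.transport.TMemoryBuffer;

    public final class ThriftReplies {
        // Serialize an EXCEPTION reply for the failed call instead of
        // dropping the connection.
        public static TMemoryBuffer buildExceptionReply(String methodName, int seqId)
                throws TException {
            TMemoryBuffer buffer = new TMemoryBuffer(1024);
            TProtocol out = new TBinaryProtocol(buffer);
            TApplicationException tae = new TApplicationException(
                    TApplicationException.INTERNAL_ERROR, "Frame size limit exceeded");
            out.writeMessageBegin(new TMessage(methodName, TMessageType.EXCEPTION, seqId));
            tae.write(out);
            out.writeMessageEnd();
            return buffer; // the contents would be written back to the channel
        }
    }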

Channel Statistics

What is the best way to expose Channel Statistics for gauging metrics (to ultimately log to Graphite)? Ideally, this would be implemented without modifying or duplicating existing Nifty classes.

Include Host and Port in connection refused exception message

It would be helpful if the connection refused exception message contained the target host and port. For example:

org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused
    at com.facebook.nifty.client.NiftyClient.connectSync(NiftyClient.java:117)
    ....
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701)
    at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:396)
    at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:358)
    at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:274)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    ... 1 more
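
A hedged sketch of the kind of wrapping being requested, using plain JDK sockets to stand in for the real connect path inside connectSync (method and variable names are illustrative):

    import java.io.IOException;
    import java.net.ConnectException;
    import java.net.InetSocketAddress;
    import java.net.Socket;
    import org.apache.thrift.transport.TTransportException;

    public final class ConnectHelper {
        public static void connectWithContext(InetSocketAddress addr)
                throws IOException, TTransportException {
            try (Socket socket = new Socket()) {
                socket.connect(addr);
            } catch (ConnectException e) {
                // Include the target address so the failure is actionable.
                throw new TTransportException(
                        "Connection refused to " + addr.getHostString() + ":" + addr.getPort(), e);
            }
        }
    }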

NettyServerConfigBuilder instead of NettyConfigBuilder

There is a small mistake in the README.md:

    // Create the server transport
    final NettyServerTransport server = new NettyServerTransport(serverDef,
                                                                 new NettyConfigBuilder(),
                                                                 new DefaultChannelGroup(),
                                                                 new HashedWheelTimer());

should be replaced by:

    // Create the server transport
    final NettyServerTransport server = new NettyServerTransport(serverDef,
                                                                 new NettyServerConfigBuilder(),
                                                                 new DefaultChannelGroup(),
                                                                 new HashedWheelTimer());

Client support for a dynamic host set

A client object can only connect to one host, and generally we create a single client and use it across the client application. But the usual practice is to have multiple sets of servers for every service. Is there support for a dynamic host set in the client object, with a ZooKeeper monitor? Or is it going to be supported?

buffer overflow? `Maximum frame size of -2147483648 exceeded`

I'm sending a thrift struct of < 300MB.

Versions:

libraryDependencies ++= Seq(
  "com.facebook.nifty" % "nifty-core" % "0.18.0",
  "com.facebook.nifty" % "nifty-client" % "0.18.0",
  "com.facebook.swift" % "swift-service" % "0.18.0",
  "com.facebook.swift" % "swift-codec" % "0.18.0",
  "com.facebook.swift" % "swift-annotations" % "0.18.0"
)

Using a Java 8 VM (Oracle JDK).

Thrift Server Setup:

    import io.airlift.units.DataSize.Unit.GIGABYTE
    import io.airlift.units.DataSize
    import scala.util.{Try, Success, Failure}
    val serverConfig = new ThriftServerConfig()
      .setBindAddress(config.schedulerEndpoint.ip)
      .setPort(config.schedulerEndpoint.port)
      .setMaxFrameSize(new DataSize(2, GIGABYTE)) // the default is 64MB

    val server = new ThriftServer(processor, serverConfig)

    Runtime.getRuntime.addShutdownHook(new Thread() {
      override def run() = {
        Try(server.close) match {
          case Success(p) => logger.info(s"Successfully stopped server")
          case Failure(exn) => logger.error(s"Error stopping service $exn")
        }
      }
    });
    server.start

The overflow?

org.jboss.netty.handler.codec.frame.TooLongFrameException: Maximum frame size of -2147483648 exceeded
        at com.facebook.nifty.codec.DefaultThriftFrameDecoder.tryDecodeFramedMessage(DefaultThriftFrameDecoder.java:102)
        at com.facebook.nifty.codec.DefaultThriftFrameDecoder.decode(DefaultThriftFrameDecoder.java:68)
        at com.facebook.nifty.codec.DefaultThriftFrameDecoder.decode(DefaultThriftFrameDecoder.java:33)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:425)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at com.facebook.nifty.codec.DefaultThriftFrameCodec.handleUpstream(DefaultThriftFrameCodec.java:42)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
        at com.facebook.nifty.core.ChannelStatistics.handleUpstream(ChannelStatistics.java:79)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.messageReceived(SimpleChannelUpstreamHandler.java:124)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.messageReceived(SimpleChannelUpstreamHandler.java:124)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
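
A plausible explanation, offered as a hedged guess: if the configured max frame size is narrowed to a signed 32-bit int somewhere along the pipeline, a 2 GiB setting wraps to Integer.MIN_VALUE, which matches the -2147483648 in the exception:

    long twoGiB = 2L * 1024 * 1024 * 1024;  // 2147483648 bytes, one past Integer.MAX_VALUE
    int narrowed = (int) twoGiB;            // wraps to -2147483648 (Integer.MIN_VALUE)
    System.out.println(narrowed);           // prints -2147483648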

Things I've tried:

  • Upgraded everything (nifty & swift) from 0.15.1 to 0.18.0.
  • Regenerated the Python structs (client) & the Swift code (with the Facebook Swift generator).
  • The problem is only visible for big structs (> 90MB).
  • I also tried overriding my own thrift frame codec factory, like this:
    import com.facebook.nifty.codec.{
      ThriftFrameCodecFactory,
      DefaultThriftFrameCodec}
    import org.apache.thrift.protocol.TProtocolFactory
    import org.jboss.netty.channel.ChannelHandler

    val frameFactory = new ThriftFrameCodecFactory(){
      override def create(maxFrameSize: Int,
        defaultProtocolFactory: TProtocolFactory): ChannelHandler = {
        import com.google.common.base.Verify.verify
        verify(maxFrameSize > 0, s"Frame size ($maxFrameSize) is negative!")
        new DefaultThriftFrameCodec(maxFrameSize, defaultProtocolFactory)
      }
    }

    import com.facebook.nifty.core.NiftyTimer
    import com.google.common.collect.ImmutableMap
    import org.apache.thrift.protocol.TBinaryProtocol
    import com.facebook.nifty.codec.ThriftFrameCodecFactory
    import com.facebook.nifty.duplex.TDuplexProtocolFactory

    val server = new ThriftServer(processor,
      serverConfig,
      new NiftyTimer("thrift"),
      ImmutableMap.of("framed",
        frameFactory.asInstanceOf[ThriftFrameCodecFactory]),
      ImmutableMap.of("binary",
        TDuplexProtocolFactory.fromSingleFactory(new TBinaryProtocol.Factory())),
      ThriftServer.DEFAULT_WORKER_EXECUTORS,
      ThriftServer.DEFAULT_SECURITY_FACTORY)

However, this fails when trying to create the ChannelHandler:

        verify(maxFrameSize > 0, s"Frame size ($maxFrameSize) is negative!")

I'm using the Thrift 0.9.2 compiler for Python and the Swift 0.18.0 compiler for Java.

Any tips or suggestions would be greatly appreciated. Note that this doesn't happen between a C++ service and the Python client.

Thanks in advance!

  • Alex

HTTP content length exceeded 16777216 bytes

I am using Swift and setting up a Thrift client using Nifty's HttpClientConnector in the following way.

HttpClientConnector connector = new HttpClientConnector(URI.create("http://xxxxx:8181/trackerservice"));

ThriftClientManager clientManager = new ThriftClientManager(new ThriftCodecManager());
thriftClient = clientManager.createClient(connector, TrackerService.class).get();

When calling remote procedures which return a large result set, I get an exception:
org.apache.thrift.transport.TTransportException: org.jboss.netty.handler.codec.frame.TooLongFrameException: HTTP content length exceeded 16777216 bytes

How can I change the max frame size on my connection?

NiftyDispatcher needs to handle RejectedExecutionException

It seems NiftyDispatcher does not catch RejectedExecutionException if submit() fails. This could happen if, say, the executor configured by the application code uses a bounded queue. Without this, we cannot prevent a thrift server that is already having performance issues from going into an OOM state, further exacerbating the problem.
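
A minimal sketch of the requested guard (the dispatcher internals and the overload handler are hypothetical; the catch clause is the point):

    import java.util.concurrent.Executor;
    import java.util.concurrent.RejectedExecutionException;

    void dispatch(Executor executor, Runnable requestTask) {
        try {
            executor.execute(requestTask);
        } catch (RejectedExecutionException e) {
            // The bounded queue is full: fail this request fast rather than
            // letting unprocessed work pile up and push an already-struggling
            // server toward OOM.
            handleOverload(e); // hypothetical: e.g. reply with TApplicationException
        }
    }

    void handleOverload(RejectedExecutionException e) {
        // e.g. send an error response or close the channel
    }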

NiftyExceptionLogger.java probably shouldn't log at error severity for broken connections

nifty-core/src/main/java/com/facebook/nifty/core/NiftyExceptionLogger.java contains the following line:

log.error(exceptionEvent.getCause(), "Exception triggered on channel connected to %s", remoteAddress);

A broken connection should probably not be logged at error severity; it shows up as ERROR in log4j-based logs. That severity class is usually reserved for things that shouldn't happen in production, but broken connections happen all the time in normal operation.
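
A hedged sketch of the severity split being suggested, mirroring the quoted call site (treating IOException as the marker for ordinary broken connections is an assumption):

    import java.io.IOException;

    // Broken pipes and resets usually surface as IOException: log those at
    // warn, and keep error for genuinely unexpected failures.
    Throwable cause = exceptionEvent.getCause();
    if (cause instanceof IOException) {
        log.warn(cause, "Connection error on channel connected to %s", remoteAddress);
    } else {
        log.error(cause, "Exception triggered on channel connected to %s", remoteAddress);
    }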

Can't change name of boss/worker threads

All boss threads are named "nifty-server-boss-%d" and all worker threads are named "nifty-server-worker-%d". I can't add an identifier to the threads. How can I change the thread names? I need a way to set the niftyName of NettyConfigBuilderBase.
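
One possible workaround, sketched under the assumption that the executors passed to server.start() control the base thread names (Netty 3 may still rename threads through its ThreadRenamingRunnable, so this needs verifying):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import com.google.common.util.concurrent.ThreadFactoryBuilder;

    // Name the pools per service instance before handing them to Nifty.
    ExecutorService bossExecutor = Executors.newCachedThreadPool(
            new ThreadFactoryBuilder().setNameFormat("my-service-boss-%d").build());
    ExecutorService workerExecutor = Executors.newCachedThreadPool(
            new ThreadFactoryBuilder().setNameFormat("my-service-worker-%d").build());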

Expose request queue metrics

In order to monitor a service, a number of counters need to be readable. Currently, almost nothing is accessible. Active_worker_threads and request_queue_size can be read from the ThreadPoolExecutor. Num_connection is accessible via nettyServerTransport.getMetrics().getChannelCount(), but only after hacking the visibility of some fields with reflection. Average_queue_time and num_killed_requests (killed because of expireTimeout) are not accessible at all.
Please provide an API for easy access to the following counters:

  • average_queue_time
  • num_killed_requests
  • num_connection
