
Comments (7)

n8han commented on June 16, 2024

> I am also using scalaz and its EitherT monad transformer to deal with Promise[Either[A, B]] in a sane fashion. Perhaps I am misusing it here?

I doubt it could be at fault.

> I expected (in my naiveté, perhaps), that the AsyncHttpClientCallback threads would be short-lived, until the promise was fulfilled, and the value used for the next request.

It's really up to the thread pool in use. Using one request as input of another shouldn't interfere with anything, since the new requests are queued in for the I/O worker and the callback will complete without any blocking.

> The actual result is that the AsyncHttpClientCallback threads from the earlier requests are used for the later requests, and not returned to the pool, or killed, or anything. Before long, I have 1000 threads running.

> Is it the thread pool that you can specify in the client config?

This should be used for all the future listeners that dispatch.Promise attaches on any map/flatMap/foreach. The only other threads in play are the I/O workers, of which there should only be as many as you have cpu cores. So, my money is on the applicationThreadPool. I would try specifying your own configured executor to see if you can make it behave the way you expect.

Letting the pool grow to 1000 seems like maybe not the best default on the part of async-http-client and would only be appropriate if you're doing a lot of blocking I/O, like jdbc, after handling the request. If your callbacks are all cpu bound then it's really just a lot of overhead to manage those threads.

If you're able to get the results you want by configuring the executor, then perhaps we should change the defaults in dispatch to be something more middle of the road.
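To sketch what "configuring the executor" buys you, here is a minimal demonstration in plain java.util.concurrent — nothing dispatch- or AHC-specific, just showing that a bounded pool queues excess callbacks instead of growing:

```java
import java.util.concurrent.*;

public class BoundedExecutorDemo {
    public static void main(String[] args) throws Exception {
        // A bounded pool: at most 4 threads; excess tasks wait in the
        // work queue instead of spawning new threads.
        ThreadPoolExecutor bounded =
            (ThreadPoolExecutor) Executors.newFixedThreadPool(4);

        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < 100; i++) {
            bounded.submit(() -> {
                try { release.await(); } catch (InterruptedException e) { }
            });
        }
        // Even with 100 pending callbacks, the pool holds at 4 threads.
        System.out.println("threads=" + bounded.getPoolSize());
        release.countDown();
        bounded.shutdown();
        bounded.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Passing a pool like this into the client config (instead of the unbounded default) is the experiment suggested above.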

from reboot.

romanb commented on June 16, 2024

As far as I know, the executor service that can be set in the AHC client config is actually used as the Netty I/O worker pool, and the maximum number of workers is by default capped at config.getIoThreadMultiplier() * Runtime.getRuntime().availableProcessors(), as can be seen here.

The default multiplier is 2.

Unless a custom executor service or thread multiplier has been configured with AHC, or maybe even a different provider is in use, that makes it even more dubious where these 1000 threads are coming from.
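Spelled out, that default cap is just this arithmetic (the multiplier value is the default mentioned above; I haven't checked it against every AHC version):

```java
public class WorkerCap {
    public static void main(String[] args) {
        // AHC default: config.getIoThreadMultiplier() == 2
        int ioThreadMultiplier = 2;
        int cores = Runtime.getRuntime().availableProcessors();
        // Maximum number of Netty I/O workers under the default config
        int maxWorkers = ioThreadMultiplier * cores;
        System.out.println("max I/O workers = " + maxWorkers);
    }
}
```

On any ordinary machine that lands far below 1000, which is the point.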


n8han commented on June 16, 2024

AHC does create its own executor in AsyncHttpClientConfig. I haven't double-checked that that line is actually being hit, but it seems most likely.

It's a standard cached threadpool which some people say is evil because it has no upper bound. Indeed, 1000 threads could be just the beginning!

The above is all true. I think we should set our own default, to something bounded.
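For reference, the "no upper bound" claim can be checked directly against java.util.concurrent — a cached pool's maximum size is effectively infinite:

```java
import java.util.concurrent.*;

public class CachedPoolBound {
    public static void main(String[] args) {
        ThreadPoolExecutor cached =
            (ThreadPoolExecutor) Executors.newCachedThreadPool();
        // A cached pool starts a new thread whenever no idle thread is
        // available, up to Integer.MAX_VALUE -- effectively unbounded.
        System.out.println("max pool size = " + cached.getMaximumPoolSize());
        System.out.println("unbounded = "
            + (cached.getMaximumPoolSize() == Integer.MAX_VALUE));
        cached.shutdown();
    }
}
```

So if callbacks ever block, each concurrently blocked callback gets its own thread, and 1000 is indeed just the beginning.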


romanb commented on June 16, 2024

Yes, they default to a cached thread pool, but they pass it to the NioClientSocketChannelFactory constructor with 3 arguments, specifying a maximum number of workers as I wrote above; maybe Netty is not adhering to it, I don't know. In any case, the Netty docs do state that it is best to specify a cached thread pool. My assumption so far has been that Netty itself decides how many threads it will use as workers, unless you specify an explicit limit, as is done in AHC.


n8han commented on June 16, 2024

Thanks for explaining that, I see where the problem is now. Dispatch reuses client.getConfig.executorService when creating any ListenableFuture; unlike its use as the worker pool, this usage is unbounded.

If we were to continue to share this executor we would need to throttle the jobs submitted to it in some way, or depart from Netty's recommendation to use an unbounded cached thread pool. Instead of doing that, let's use a separate executor for listenable futures (which is how it was done early in reboot development) and default to a fixed thread pool of a reasonable size. People who aren't doing any blocking in their promises should be encouraged to set this to something like 2 times their number of cores.
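A minimal sketch of that proposed default — a fixed pool sized at twice the core count, standing in for the listener executor (the names here are illustrative, not dispatch's actual API):

```java
import java.util.concurrent.*;

public class ListenerPoolDefault {
    public static void main(String[] args) throws Exception {
        // Proposed default: 2 x cores, reasonable for CPU-bound callbacks.
        int size = 2 * Runtime.getRuntime().availableProcessors();
        ExecutorService listenerPool = Executors.newFixedThreadPool(size);

        // Listener callbacks submitted here share at most `size` threads;
        // a burst of completed promises queues up instead of growing the pool.
        Future<Integer> result = listenerPool.submit(() -> 21 * 2);
        System.out.println("result = " + result.get());

        listenerPool.shutdown();
        listenerPool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Anyone doing blocking work (jdbc and the like) in their promises would size this up explicitly; the bounded default just keeps the common case from running away.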


softprops commented on June 16, 2024

Would something like the ThrottleRequestFilter help for throttling requests?


n8han commented on June 16, 2024

I don't think we want to throttle the number of requests. Using a bounded threadpool for the future listeners most directly addresses the problem described, and is easy to do. (And it's done.)

