
Comments (4)

hickscorp commented on August 16, 2024

@cabol this is such an extensive answer, thanks a lot for taking the time.
We can close the issue!


cabol commented on August 16, 2024

Hi!

It is a good question. You can have as many caches as you want in Nebulex by using dynamic caches. Unfortunately, dynamic caches are not currently supported by the decorators, which means the option you are using, cache: {__MODULE__, :which_cache, []}, must return the cache module to use. However, this will be possible in Nebulex v3 (which will be released soon): it supports returning a dynamic cache, which is exactly what you need here.
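To make that concrete, here is a rough sketch of how multiple dynamic instances of a single cache module can be started and selected at runtime in Nebulex v2. The module and instance names (MyApp.Cache, :tenant_a_cache, :tenant_b_cache) are made up for illustration, and the exact API may vary by version:

```elixir
# In the application's supervision tree: several instances of the same
# cache module, each registered under its own name (a "dynamic cache").
children = [
  {MyApp.Cache, name: :tenant_a_cache},
  {MyApp.Cache, name: :tenant_b_cache}
]

# At the call site, the dynamic instance is selected explicitly.
# The decorators cannot do this selection for you in v2, which is
# the limitation discussed above.
MyApp.Cache.with_dynamic_cache(:tenant_a_cache, fn ->
  MyApp.Cache.put({:user, 1}, %{name: "Alice"})
  MyApp.Cache.get({:user, 1})
end)
```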

On the other hand, I'm not sure if I understand this statement:

"The goal is to have as many caches as users - for example - so waiting operations aren't shared between users."

What are the waiting operations you mention? I'm asking because I've had a similar scenario with a lot of users and a high workload, using one cache, and never had an issue, so maybe I can help?

I'll stay tuned!


hickscorp commented on August 16, 2024

Good afternoon @cabol and thanks for taking the time.

I understand that we might have to wait for the v3 release to use dynamically (at runtime) spun-up caches.

> What are the waiting operations you mention? I'm asking because I've had a similar scenario with a lot of users and a high workload, using one cache, and never had an issue, so maybe I can help?
It might be very much my misunderstanding of how Nebulex works under the hood. Let me run a scenario for you to tell me if I'm wrong.

  • Imagine a function that takes a user ID. This function is computationally intensive and takes a long time to complete.
  • We annotate this function with a decorator making it cacheable, e.g. cacheable(cache: CacheModule, key: user_id); see the sketch below.
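A minimal sketch of that setup (the module and function names are hypothetical, and the slow work is simulated with a sleep):

```elixir
defmodule MyApp.Reports do
  use Nebulex.Caching

  # Expensive function keyed by user_id; the result is cached so
  # subsequent calls with the same user_id return without recomputing.
  @decorate cacheable(cache: MyApp.Cache, key: user_id)
  def build_report(user_id) do
    # Stand-in for the long, computationally intensive work.
    Process.sleep(5_000)
    %{user_id: user_id, generated_at: DateTime.utc_now()}
  end
end
```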

So far so good.

Now I'm assuming that the function runs inside the cache process, so while the cache is being primed, its mailbox is waiting.
So if you have two different requests from two different users, I'm assuming that the 2nd user has to wait for the cache to finish being primed for the 1st user before it can continue?
Or is Nebulex spinning up one GenServer under the hood per cache key?...

That's it :) Hopefully you now understand better what I meant, and maybe you can immediately spot the flaw in my reasoning?


cabol commented on August 16, 2024

Hi @hickscorp!

> Now I'm assuming that the function runs inside the cache process, so while the cache is being primed, its mailbox is waiting.
> So if you have two different requests from two different users, I'm assuming that the 2nd user has to wait for the cache to finish being primed for the 1st user before it can continue?
> Or is Nebulex spinning up one GenServer under the hood per cache key?...

Right, let me try to explain. This depends 100% on the cache adapter (and the cache implementation underneath), but none of the Nebulex adapters behave that way. For example, the local adapter (like the Cachex adapter) uses ETS tables underneath, so when you hit the cache it goes directly to the table (backend); there is no single process channeling the cache commands, the call hits the backend directly, leveraging the concurrency capabilities of ETS tables. The adapters were designed to avoid a single-process bottleneck. But again, this depends entirely on the adapter: you could build an adapter for another cache that implements a single-process strategy, and in that case it could be the problem you described. So you don't need to worry about it here, because that is not how Nebulex and its built-in adapters work.
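As a rough illustration of why there is no queueing between users, consider two callers priming the cache at the same time. This reuses the hypothetical MyApp.Reports sketch from earlier and is only meant to show the calling pattern, not Nebulex internals:

```elixir
# Two different users priming the cache concurrently. Each Task runs the
# decorated function in its own process; with the local adapter the cache
# reads and writes go straight to ETS, so user 2 is not queued behind
# user 1 in a single cache process mailbox.
tasks =
  for user_id <- [1, 2] do
    Task.async(fn -> MyApp.Reports.build_report(user_id) end)
  end

Task.await_many(tasks, :timer.seconds(30))
```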

Please let me know if it makes sense; I know it may be confusing 😅. I'll stay tuned!

