
Comments (9)

benji6 avatar benji6 commented on May 31, 2024

The idea of consumeIterable was simply to iterate through whatever iterable each approach produced. It wasn't really about creating an array - it could be a for...of loop with some sort of IO, or anything really. The point is that once you have created your collection (an array if you're using transducers or native methods, a lazy iterable if you're using imlazy) you then need to do something with it. If you just measure the creation of the collection then imlazy will easily beat the others, because it does no work at that point. But I thought that, to be fair, the benchmark should measure both the time to create the collection and the time to iterate over it. So with transducers we measure the time to create the array and then the time to iterate over it, and with imlazy we measure the time to create the lazy iterable and the time to iterate over it. Does that make sense?
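A minimal sketch of that measurement shape (`consumeIterable` here is a hypothetical stand-in; the real benchmark code may differ):

```javascript
// Hypothetical sketch: consume whatever iterable each approach produced,
// so both approaches pay their full cost (creation + iteration).
const consumeIterable = (iterable) => {
  let acc = 0;
  for (const value of iterable) acc += value; // any IO or work would do
  return acc;
};

// Eager (array) approach: the work happens up front, iteration is cheap.
const eager = [1, 2, 3].map((x) => x * 2);

// Lazy approach: building is essentially free, the work happens during iteration.
function* lazy() {
  for (const x of [1, 2, 3]) yield x * 2;
}

console.log(consumeIterable(eager)); // 12
console.log(consumeIterable(lazy())); // 12
```

Measuring only the construction step would make the lazy version look free, which is why both phases are timed together.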

Hopefully that's a reasonable approach but in either case imlazy isn't performing so well these days!

from imlazy.

xgbuils avatar xgbuils commented on May 31, 2024

Yes! It sounds good.

However, I suppose that transducers are a way to separate data structures from transformations. A transducer is just the part that composes the transformations, and the iteration is the application of the composed transformation to the data structure (in this case, the iterable). So, applying maps and filters over an iterable is analogous to composing transformations, and iterating over the created iterable is analogous to applying them.

I mean that if we have const iterable = [1, 2, 3, 4], then:

const newIterable = I.map(e => 2 * e, I.map(e => 3 * e, iterable))

could be analogous to:

const transform = R.compose(e => 3 * e, e => 2 * e)

and

const array = []
for (const val of newIterable) {
    array.push(val)
}

analogous to:

const array = []
for (const val of iterable) {
    array.push(transform(val))
}

The first code is consumeIterable(newIterable); the second is R.into([], R.map(transform), iterable).
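The analogy can be checked with plain generators standing in for imlazy (a sketch: `lazyMap` is a hypothetical substitute for `I.map`, and `compose2` applies its arguments left-to-right to match the order the maps are applied above):

```javascript
// Generator-based stand-in for a lazy map (imlazy-style argument order).
const lazyMap = (fn, iterable) =>
  (function* () {
    for (const value of iterable) yield fn(value);
  })();

// Left-to-right composition: 3x is applied first, then 2x,
// matching I.map(e => 2 * e, I.map(e => 3 * e, iterable)).
const compose2 = (f, g) => (x) => g(f(x));

const iterable = [1, 2, 3, 4];
const newIterable = lazyMap((e) => 2 * e, lazyMap((e) => 3 * e, iterable));
const transform = compose2((e) => 3 * e, (e) => 2 * e);

console.log([...newIterable]); // [6, 12, 18, 24]
console.log(iterable.map(transform)); // [6, 12, 18, 24]
```

Both routes produce the same values; the difference is only where the composition lives (in the wrapped iterable versus in a composed function).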


benji6 avatar benji6 commented on May 31, 2024

Yeah, I see what you're saying, and you have a point. I still feel a bit unsure, because if you wanted to use transducers as a replacement for lazy iterables you would still need to create an iterable before you pass it to a third-party library or iterate over it doing some IO. But I guess I can go ahead and remove the consumeIterable function from the benchmarks; if nothing else it will make them a little simpler.


benji6 avatar benji6 commented on May 31, 2024

Raised a PR here: #33. The results are kind of interesting. I'm beginning to realise that benchmarking isn't exactly a science, and trying to figure out how performant a library is going to be in the real world isn't exactly straightforward.


xgbuils avatar xgbuils commented on May 31, 2024

Yes, I think we need more analysis. At first glance it seems that the iterable approach to transformations is worse than the transducer approach, but we need to investigate more. For example, on my computer, applying 35 map transformations to an array of 1000 elements performs better with imlazy than with ramda.


benji6 avatar benji6 commented on May 31, 2024

Yeah - I think the v8 engine is designed to behave differently depending on what hardware it's running on. So for a benchmark to be really informative we'd probably have to run it on multiple devices. But also the operations in my benchmarks are really quite arbitrary and like you say - different operations yield different results so who knows what solution is best in the real world. Different versions of node give wildly different results also.

I might actually delete the benchmarks section in the readme because I'm not sure how useful it is. I think all I take away from it is that all the different approaches perform relatively similarly and if you need super performant code you should probably write it imperatively or use a different solution like Web Assembly or native node modules.
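For a sense of what "write it imperatively" means here, a generic map-filter pipeline can be collapsed into a single loop with no intermediate allocations (an illustrative sketch, not taken from the benchmarks):

```javascript
const numbers = [1, 2, 3, 4, 5, 6];

// Chained native methods: allocates an intermediate array per step.
const chained = numbers.map((x) => x * 2).filter((x) => x > 4);

// Imperative version: one pass, one output array, no intermediates.
const imperative = [];
for (const x of numbers) {
  const doubled = x * 2;
  if (doubled > 4) imperative.push(doubled);
}

console.log(chained); // [6, 8, 10, 12]
console.log(imperative); // [6, 8, 10, 12]
```

The imperative loop gives the engine the least abstraction to optimise away, which is why it tends to win when raw throughput matters.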


xgbuils avatar xgbuils commented on May 31, 2024

> Yeah - I think the v8 engine is designed to behave differently depending on what hardware it's running on. So for a benchmark to be really informative we'd probably have to run it on multiple devices. But also the operations in my benchmarks are really quite arbitrary and like you say - different operations yield different results so who knows what solution is best in the real world. Different versions of node give wildly different results also.

I think it's not all so relative. Of course, the benchmarks depend on the device. However, I'm sure that the cost of doing:

let iterable = getSomeIterable()
for (let i = 0; i < numOfMaps; ++i) {
    iterable = I.map(e => 2 * e, iterable) // each map only wraps the iterable; no work is done yet
}
[...iterable] // spreading consumes the iterable, so all the work happens here

is linearly related to the numOfMaps applied and to the size of the iterable. I mean, if:

  1. imlazy is worse than ramda with 3 maps and an iterable of 1000 items (on my machine), and
  2. imlazy is better than ramda with 35 maps and an iterable of 1000 items (on my machine),

then imlazy improves relative to ramda as the number of maps increases. On my machine it is better from 35 maps onwards; on another machine it might be from 20 or from 100. But the relevant thing is that imlazy beats ramda from some threshold up to Infinity.

Another question is whether there is a useful solution that needs 35 maps. Maybe it's possible to improve the performance a little more to make it useful. I don't know.

Anyway, I think it's not all relative. We have a time formula A * numOfOps + B * size + C, and I think we can still play with the implementations to try to reduce A, B and C. I don't give up. Soon I want to show a project that measures the different benchmarks depending on sizes, methods and libraries.
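A rough sketch of that kind of measurement, with a plain generator-based lazy map standing in for imlazy (`timeLazy`, the names and the sizes are illustrative, not from the actual benchmark suite):

```javascript
// Generator-based stand-in for a lazy map (imlazy-style argument order).
const lazyMap = (fn, iterable) =>
  (function* () {
    for (const value of iterable) yield fn(value);
  })();

// Time building numOfMaps lazy map layers over `size` items,
// then consuming the result, so the full cost is measured.
const timeLazy = (numOfMaps, size) => {
  const start = Date.now();
  let iterable = Array.from({ length: size }, (_, i) => i);
  for (let i = 0; i < numOfMaps; ++i) {
    iterable = lazyMap((e) => 2 * e, iterable);
  }
  const result = [...iterable]; // consuming pays the whole cost
  const elapsed = Date.now() - start;
  return { elapsed, checksum: result[result.length - 1] };
};

console.log(timeLazy(3, 1000));
console.log(timeLazy(35, 1000));
```

Sweeping numOfMaps and size with something like this is what would expose the crossover threshold between the lazy and transducer approaches on a given machine.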

Cheers!


benji6 avatar benji6 commented on May 31, 2024

That would be cool to see! :)


benji6 avatar benji6 commented on May 31, 2024

πŸŽ‰ This issue has been resolved in version 6.4.0 πŸŽ‰

The release is available on:

Your semantic-release bot πŸ“¦πŸš€

