
Comments (38)

josiahls commented on August 14, 2024 (28)

I've personally liked the idea behind datapipes and the newer data loader. It would be helpful to know some examples of use cases where this concept/API broke down.

Knowing the breaking limitations would help people who are about to jump into torchdata, because they are not obvious, to me at least.

from data.

rbavery commented on August 14, 2024 (23)

Our ML team has been an avid user of torchdata. We have used it to build datapipes that fetch large raster datasets from cloud providers to support training and inference. Currently we have a couple of projects using torchdata; the most robust is zen3geo (https://zen3geo.readthedocs.io/en/latest/walkthrough.html), a library managed by my colleague @weiji14 for fetching and batching large satellite images, with steps organized as functional datapipe ops.

While the API makes it easier to reuse custom data operations, we've been running into some consistent pain points when integrating datapipes with DataLoader V1 or DataLoader V2. We've had to switch back to DataLoader V1 and Datasets.

There doesn't seem to be a clear set of documented rules for prefetching, shuffling, buffer sizes, or memory pinning that results in good performance, or even in any performance gain over single-process data loading when using torchdata with either DataLoader. Every configuration we have tried results in hangups, out-of-memory errors, or slower performance than a single process. It's also unclear how these parameters interact with the different reading services.

I would love to see better docs and functionality for setting prefetching, shuffling, etc. with different kinds of reading services. Being able to profile datapipes and inspect the RAM and CPU consumption of each operation would also be invaluable.
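To make the kind of knob I mean concrete, here is a toy pure-Python sketch of prefetching into a bounded buffer (an illustration only, not torchdata's actual implementation; the names are hypothetical):

```python
import queue
import threading

def prefetch(iterable, buffer_size=4):
    """Yield items from `iterable` while a background thread keeps a
    bounded buffer filled. `buffer_size` is the kind of parameter whose
    interaction with worker counts and pinning we found hard to tune."""
    sentinel = object()
    buf = queue.Queue(maxsize=buffer_size)

    def producer():
        for item in iterable:
            buf.put(item)   # blocks when the buffer is full
        buf.put(sentinel)   # signal exhaustion

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = buf.get()
        if item is sentinel:
            return
        yield item

# Too small a buffer starves the consumer; too large a buffer risks OOM
# when each item is a large raster tile.
batches = list(prefetch(range(10), buffer_size=2))
```

Even in this toy version, choosing `buffer_size` involves the same memory-versus-throughput trade-off we keep hitting with the real reading services.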

vincenttran-msft commented on August 14, 2024 (15)

Thanks for the update, @laurencer.

This is unfortunate news to hear, as we on Microsoft's Developer Experience team have seen a lot of interest from our customers in cloud computing and seamless integration between Azure Storage and PyTorch. Our summer intern's project was building custom FileLoader and FileLister DataPipes for easily interacting with datasets stored on Azure Storage, so the halt in development and the uncertain future of the torchdata repo make it difficult to plan our continued integration work with PyTorch.

That said, I am hopeful this is a necessary step back to re-strategize and refine the roadmap, ultimately yielding a better user experience for all. As the field of AI/ML continues to develop, it is a matter of when (not if) we will revisit building direct support for Azure Storage in PyTorch workflows, so we will likely reach out once the future of torchdata and data loading is clearer. In the meantime, I would like to echo some questions (many raised by previous posters above) whose answers would greatly help us developers while there are no new releases:

  1. What are the expectations around compatibility? Should we even continue to build out torchdata support, or is it becoming obsolete in the near future?
  2. Is there any ETA for an update on the future of torchdata? While I understand this may be difficult, sharing any information about timing would help developers decide whether to keep building their infrastructure on torchdata or to begin migrating.

Thanks in advance, and please feel free to reach out if necessary!

nairbv commented on August 14, 2024 (14)

Does the design re-evaluation apply only to TorchData, or also to the portion of the datapipes API that was upstreamed to PyTorch core?

https://github.com/pytorch/pytorch/tree/main/torch/utils/data/datapipes

andrew-bydlon commented on August 14, 2024 (11)

I too would like to hear what limitations you are referencing.

If the concern is performance, I believe there's an argument: you could make something compatible with a compiled framework, especially torch.compile. As an example, someone recently mentioned speedups from loading tar files with Rust.

My question is: what do you recommend as an alternative platform to torchdata for flexible and fast data loading in PyTorch?

BlueskyFR commented on August 14, 2024 (11)

Also, do you still recommend using torchdata in the meantime, or will compatibility with torch break at some point, such that we should avoid it?

npuichigo commented on August 14, 2024 (9)

As for me, the ideal data pipeline should be ergonomic, flexible, and efficient.

Chainable iterators have already shown their power in iterator-algorithm libraries like itertools and more-itertools, and torchdata chose to enhance that with a functional programming API, which is good. Rust's Iterator adapters actually provide a good list of commonly used operations: besides Filter and Map, there are also FilterMap, FlatMap, and so on.
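A minimal sketch of that chainable style in plain Python (a hypothetical `Pipe` class modeled on Rust's Iterator adapters, not torchdata's actual classes):

```python
from itertools import chain

class Pipe:
    """Toy chainable iterator with map/filter/filter_map/flat_map
    adapters in the spirit of Rust's Iterator trait."""
    def __init__(self, source):
        self.source = source

    def __iter__(self):
        return iter(self.source)

    def map(self, fn):
        return Pipe(fn(x) for x in self)

    def filter(self, pred):
        return Pipe(x for x in self if pred(x))

    def filter_map(self, fn):
        # fn returns None to drop an item, otherwise the mapped value
        return Pipe(y for y in (fn(x) for x in self) if y is not None)

    def flat_map(self, fn):
        return Pipe(chain.from_iterable(fn(x) for x in self))

result = list(
    Pipe(range(6))
    .filter(lambda x: x % 2 == 0)     # keep 0, 2, 4
    .flat_map(lambda x: [x, x * 10])  # expand each item to two
)
# result == [0, 0, 2, 20, 4, 40]
```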

But flexibility still has room for improvement. At a minimum, torchdata should be comparable to pypeln, which has good flexibility for switching between or mixing thread-, process-, and coroutine-based tasks.

As for performance, can torchdata be comparable with Hugging Face Datasets? Can we easily leverage Arrow or something else to build a high-performance data pipeline? It would be better to show more benchmarks on production data.

laurencer commented on August 14, 2024 (8)

The short answer is we need to look at both. More holistically, there are lots of benefits to datapipes and DataLoaderV2; however, we've seen limitations in a few use cases which indicate we may need to tweak them a bit (or that they're not the one-stop solution we were hoping for). Overall, the data-loading space is really important and we hear about a lot of pain points, so we want to make sure we get the core abstractions right.

BlueskyFR commented on August 14, 2024 (8)

IMO the first thing that comes to mind is TensorFlow's tf.data.Dataset API, which is super cool to use. It is deeply integrated into the framework and with Keras, and the different operations you "pipe"/chain together can be fused at runtime so that the input pipeline is more optimized.
However, there are still solutions such as NVIDIA DALI that are way faster if they apply to your use case.

I don't know TorchData in much detail, but I'd say building a nice-looking pipeline is easy; making a pipeline optimized for high performance while still making it look cool is the real challenge.
By cool-looking I mean nice and easy to read as code, but also easily extensible, like users being able to share "databricks" together or something.
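The runtime fusion I mentioned can be pictured as collapsing a chain of elementwise ops into a single pass over the data (a toy Python sketch of the idea, not tf.data's actual mechanism):

```python
from functools import reduce

def fuse_maps(*fns):
    """Collapse a chain of elementwise map functions into one function,
    so a single pass over the data replaces N chained passes."""
    return lambda x: reduce(lambda acc, f: f(acc), fns, x)

# One fused traversal instead of two chained .map() stages.
pipeline = fuse_maps(lambda x: x + 1, lambda x: x * 2)
out = [pipeline(x) for x in range(3)]  # [2, 4, 6]
```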

I hope this helps!

jaanli commented on August 14, 2024 (8)

Hi @laurencer - any update on this?

talmo commented on August 14, 2024 (6)
  1. What are the expectations around compatibility? Should we even continue to build out torchdata support, or is it becoming obsolete in the near future?
  2. Is there any ETA for an update on the future of torchdata? While I understand this may be difficult, sharing any information about timing would help developers decide whether to keep building their infrastructure on torchdata or to begin migrating.

+1, any updates on the roadmap since September?

hhoeflin commented on August 14, 2024 (5)

As for a data-pipelining solution, it would be nice if this could be developed without a dependency on a deep learning framework (torch, tensorflow, etc.).

seanmcc-msft commented on August 14, 2024 (5)

Any update here? My team is interested in creating an Azure Storage extension for PyTorch, similar to S3, but we cannot proceed with planning and implementation until we know what the future of PyTorch extensions will look like.

sehoffmann commented on August 14, 2024 (2)

Hey, thanks for the update. Does that mean that torchdata will become obsolete in the future?

As I already indicated in older issues, what I see as the biggest weakness right now is the lack of control and flexibility with regard to:

  • Shuffling
  • Sharding
  • Multiprocessing (dispatching to other processes etc.)

These things are currently tightly integrated into the torchdata core and not easily accessible from user code. Giving user code the same power and flexibility is, in my opinion, paramount to facilitating pipelines more complex than the vanilla CV pipeline. Or, conversely, it should be possible to implement these functionalities in user land without privileged handling from torchdata.
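To show what user-land versions could look like, shuffling and sharding are expressible as plain iterator transforms (hypothetical helpers in pure Python, written without touching torchdata internals):

```python
import random

def shard(iterable, num_shards, index):
    """Keep every num_shards-th item starting at `index`: sharding as
    an ordinary user-land iterator transform."""
    for i, item in enumerate(iterable):
        if i % num_shards == index:
            yield item

def shuffle(iterable, buffer_size, seed=0):
    """Windowed shuffle over a bounded buffer, again in user land."""
    rng = random.Random(seed)
    buf = []
    for item in iterable:
        buf.append(item)
        if len(buf) >= buffer_size:
            # emit a random element once the window is full
            yield buf.pop(rng.randrange(len(buf)))
    rng.shuffle(buf)  # drain the remainder in random order
    yield from buf

# Rank 1 of 2 sees the odd indices, shuffled within a small window.
items = list(shuffle(shard(range(10), num_shards=2, index=1), buffer_size=4))
```

Nothing here needs privileged hooks; the pain today is that the built-in sharding and shuffling cannot be replaced by code like this.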

You can have a look at my repository (sehoffmann/atmodata) for some monkey patches that were necessary to facilitate my pipeline.

BarclayII commented on August 14, 2024 (2)

The DGL team is currently studying what the UX should be for scaling deep learning on graphs, namely the sampling strategies. More specifically, we want to support customization of:

  • Different graph storages (in-memory, disk, graph databases, etc.)
  • Different node/edge feature storages (in-memory, disk, etc.)
  • Sampling algorithms (online neighbor/subgraph sampling, offline sampling, etc.)
  • Downstream tasks (node classification, link prediction with negative sampling, graph classification, etc.)
  • Orchestration (whether to put sampling and feature fetching in multiprocessing/multithreading, how to schedule different stages, etc.).

So our current design depends on the composability of torchdata's DataPipes to allow for maximum extensibility, expressing the graph storages/feature storages/sampling algorithms/etc. as a composition of iterables and their transforms.

That being said, we are currently not pursuing active usage of DataLoader2 due to concerns about compatibility with existing packages that depend on the PyTorch DataLoader (e.g. PyTorch Lightning). We did, however, borrow some ideas from ReadingService (namely the in-place editing of DataPipes).

We already have a demo at https://github.com/dmlc/dgl/tree/master/tests/python/pytorch/graphbolt.

Happy to discuss further.

npuichigo commented on August 14, 2024 (2)

I found Ray Data as an alternative for data loading. It's framework-agnostic and performant, and it also provides a chainable API with more fine-grained parallelism control. I also wrote a tutorial on using it together with Hugging Face Datasets: https://github.com/npuichigo/blazing-fast-io-tutorial.

coufon commented on August 14, 2024 (2)

I'd like to share our work https://github.com/google/space, which supports materialized views in ML data pipelines. It attaches more metadata (versioning, column stats, logical plans) to ML datasets and pipelines to provide a database/lakehouse-like experience. Materialized views have the benefits of incremental processing (with the ability to go back to old versions) and data lineage. Hope it will be useful.

BarclayII commented on August 14, 2024 (1)

@npuichigo I checked pypeln as well. It seems the user needs to specify how to organize the queues at a low level (e.g. multiprocessing, multithreading, asyncio, etc.). Normally our UX shouldn't involve such a low-level specification unless developers want to implement their own pipeline scheduling.

The original dataset API could be used with composition too, and I'm not sure exactly what challenges we'd face in doing so. I know we wouldn't have the functional helper functions but that seems minor, and not sure what else we'd be missing.

@nairbv Other than the functional helper functions, I find the in-place editing of DataPipes (namely the torchdata.dataloader2.graph namespace) useful. For instance, starting from a single-process DataPipe, I can have a DataLoader that rewrites the DataPipe for multiprocessing, and the process will be transparent to users. We also intend to apply the same idea to coordinate graph sampling, feature prefetching, and CPU-GPU transfer. https://github.com/dmlc/dgl/blob/master/python/dgl/graphbolt/dataloader.py#L58 shows an example.
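The idea can be sketched without torchdata at all, using toy node types in place of real DataPipes (the actual torchdata.dataloader2.graph utilities perform the analogous rewrite on a DataPipe graph):

```python
class Node:
    """Toy pipeline stage; `source` points at the upstream stage."""
    def __init__(self, source=None):
        self.source = source

class Reader(Node): pass
class Batcher(Node): pass
class Prefetcher(Node): pass

def replace_upstream(tail, target_type, factory):
    """Walk upstream from `tail` and splice a replacement in for the
    first stage of `target_type`, leaving the rest of the chain intact."""
    node = tail
    while node is not None:
        if isinstance(node.source, target_type):
            node.source = factory(node.source)
            return tail
        node = node.source
    return tail

pipe = Batcher(Reader())
# A loader can transparently rewrite the graph, e.g. wrapping the
# reader in a prefetch stage when switching to multiprocessing.
pipe = replace_upstream(pipe, Reader, lambda old: Prefetcher(old))
```

The user only ever holds `pipe`; the rewrite happens behind the loader's back, which is exactly what makes the in-place editing attractive.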

Happy to discuss further.

erip commented on August 14, 2024

Thanks for the update, @laurencer. Does this mean that torchdata as a domain library is under review, or the entire concept of datapipes/DataLoader2?

biphasic commented on August 14, 2024

I started prototyping a new version of my event-data library on top of torchdata. The API is very clean and easy to understand, which is a strong plus, even if it has a minor performance impact (I didn't verify that). I do remember struggling with DataLoader2 and getting multithreading to work.

nairbv commented on August 14, 2024

@npuichigo pypeln looks interesting. Based on there being multiple single-threaded queues between stages, I assume it is designed for a single-node setup? PyTorch users would need multi-node support. To insert a queue between stages of a pipeline with multi-node stages, presumably we'd want some kind of purpose-built standalone message queue. I'm not sure that kind of setup is desirable -- once the training data reaches GPU hosts, I'd think we usually don't want to send it back elsewhere -- so that architecture might make more sense for pre-processing.

@BarclayII

our current design depends on the composability of torchdata's DataPipes
not pursuing active usages on DataLoader2

I'd like to use DataPipes for some NLP problems for similar reasons, and I have some prototypes. I'd still like confirmation, but from what I can tell it may only be TorchData that has paused development, whereas the DataPipes API is already part of PyTorch core.

For my use-cases I don't need DataLoader2 or readingservice/adapter. I think there are other ways to solve the problems addressed by those additional APIs -- I think there are ways to do it with just DataPipes that would address the concerns @sehoffmann raises around shuffling/sharding being "tightly integrated into the torchdata core, and not easily accessible from user code."

I also wonder whether we actually need a separate API focused on composability of datasets. The original dataset API could be used with composition too, and I'm not sure exactly what challenges we'd face in doing so. I know we wouldn't have the functional helper functions but that seems minor, and not sure what else we'd be missing.
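To illustrate composing the original map-style Dataset API: nothing beyond `__getitem__`/`__len__` is needed (a plain-Python sketch with hypothetical wrapper names; torch's built-in ConcatDataset plays a similar role):

```python
class MapDataset:
    """Wrap a map-style dataset, applying `fn` per item on access."""
    def __init__(self, dataset, fn):
        self.dataset, self.fn = dataset, fn
    def __len__(self):
        return len(self.dataset)
    def __getitem__(self, i):
        return self.fn(self.dataset[i])

class ConcatDataset:
    """Chain two map-style datasets end to end."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __len__(self):
        return len(self.a) + len(self.b)
    def __getitem__(self, i):
        return self.a[i] if i < len(self.a) else self.b[i - len(self.a)]

# Plain lists already satisfy the __getitem__/__len__ protocol.
ds = MapDataset(ConcatDataset([1, 2], [3, 4]), lambda x: x * 10)
values = [ds[i] for i in range(len(ds))]  # [10, 20, 30, 40]
```

Compared to DataPipes, what's lost here is mainly the functional registration sugar, which is my point: composition itself doesn't require a new API.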

bryant1410 commented on August 14, 2024

however we've seen some limitations in a few use-cases which indicate we may need to tweak them a bit

@laurencer can you elaborate on what those use cases are?

bhack commented on August 14, 2024

I am also interested in the torchdata and data-loading vision/roadmap as it relates to the audio/video API fragmentation (pytorch/pytorch#81102).

bhack commented on August 14, 2024

I don't know if @laurencer is still active on this, or whether he is working on PyTorch at Meta more generally. /cc @ejguan, as he is the code owner of the PyTorch DataLoader.

nairbv commented on August 14, 2024

@bhack I know @ejguan isn't on torchdata anymore (I think he's in ads now).

bhack commented on August 14, 2024

@nairbv I think the code owners need to be updated: https://github.com/pytorch/pytorch/blob/main/CODEOWNERS#L113-L114

nairbv commented on August 14, 2024

Agreed. I don't believe the DataLoader currently has a specific owner, but I'm not at Meta anymore.

bhack commented on August 14, 2024

I hope it will be one of the topics at https://events.linuxfoundation.org/pytorch-conference/
