

NicolasHug commented on June 16, 2024

Thanks for the report @rbavery

> Are there any current efforts in this direction I should be aware of?

No, we haven't been looking at the RPN's support for torch.compile yet.

> I think this will at least involve changing the AnchorGenerator

Just note that a lot of this code is public, and changing its behaviour, e.g. the expected input/output, would technically break backward compatibility. So adding support for AOT while still preserving BC may not be a trivial task.

Beyond the RPN, what model specifically are you interested in tracing?

from vision.

rbavery commented on June 16, 2024

Got it, I initially went with supporting TorchScript scripting since it seemed easier and would only require adding type annotations. I've made edits to this model which uses a SWIN Transformer backbone, an FPN, and a Faster RCNN head:

https://github.com/allenai/satlas/blob/main/configs/satlas_explorer_marine_infrastructure.txt
https://github.com/allenai/satlas/blob/main/satlas/model/model.py

So far I've addressed TorchScript scripting issues with type annotations in the Satlas model source.

The first issue I hit in torchvision is this:

```
RuntimeError:
Module 'GeneralizedRCNNTransform' has no attribute 'image_mean' (This attribute exists on the Python module, but we failed to convert Python type: 'list' to a TorchScript type. List trace inputs must have elements. Its type was inferred; try adding a type annotation for the attribute.):
  File "/opt/conda/lib/python3.10/site-packages/torchvision/models/detection/transform.py", line 167
            )
        dtype, device = image.dtype, image.device
        mean = torch.as_tensor(self.image_mean, dtype=dtype, device=device)
                               ~~~~~~~~~~~~~~~ <--- HERE
        std = torch.as_tensor(self.image_std, dtype=dtype, device=device)
        return (image - mean[:, None, None]) / std[:, None, None]
'GeneralizedRCNNTransform.normalize' is being compiled since it was called from 'GeneralizedRCNNTransform.forward'
  File "/opt/conda/lib/python3.10/site-packages/torchvision/models/detection/transform.py", line 141
            if image.dim() != 3:
                raise ValueError(f"images is expected to be a list of 3d tensors of shape [C, H, W], got {image.shape}")
            image = self.normalize(image)
            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
            image, target_index = self.resize(image, target_index)
            images[i] = image
```

I've had some trouble addressing this with typing: since the class attribute is already typed, I'm not sure how to make TorchScript scripting understand that this attribute can be either a List[float] or None. I might need to make code modifications. If so, I'll try to do it in a way that preserves backwards compatibility and keeps tests passing, and open a PR if that's helpful.
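For what it's worth, the usual TorchScript pattern for this is an explicit `Optional[List[float]]` class-level annotation plus a local `None` check so the compiler can refine the type. This is a minimal hypothetical module to illustrate the pattern, not torchvision's actual `GeneralizedRCNNTransform`:

```python
import torch
from typing import List, Optional


class Normalize(torch.nn.Module):
    # Class-level annotation tells TorchScript the attribute may be a
    # List[float] or None, instead of letting it infer 'list' from a value.
    image_mean: Optional[List[float]]

    def __init__(self, image_mean: Optional[List[float]] = None):
        super().__init__()
        self.image_mean = image_mean

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # Assign to a local first: TorchScript refines the type of a local
        # after an explicit `is None` check, but not of `self.image_mean`.
        mean = self.image_mean
        if mean is None:
            return image
        m = torch.as_tensor(mean, dtype=image.dtype, device=image.device)
        return image - m[:, None, None]


scripted = torch.jit.script(Normalize([0.485, 0.456, 0.406]))
```

Both the `None` and non-`None` cases then script cleanly, since every use of the attribute goes through the refined local.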


rbavery commented on June 16, 2024

I was able to get TorchScript scripting to work by refactoring the Satlas source code, mostly by adding typing, removing control flow in some spots, and replacing complex Python data structures containing tensors with plain tensors. Inference on dynamic batches appears to work without error. No changes to torchvision were needed.
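To illustrate the kind of refactor meant here (a hypothetical sketch, not the actual Satlas code): instead of passing per-level features around in nested Python containers that scripting handles poorly, a module can take a typed `List[Tensor]` and return a plain tensor.

```python
import torch
from typing import List


class Head(torch.nn.Module):
    # Typed List[Tensor] input and a single Tensor output script cleanly,
    # unlike nested dicts whose value types the compiler can't unify.
    def forward(self, feats: List[torch.Tensor]) -> torch.Tensor:
        # Flatten each feature map and concatenate along the channel dim
        # instead of keeping the levels in a dict keyed by level name.
        return torch.cat([f.flatten(1) for f in feats], dim=1)


scripted_head = torch.jit.script(Head())
```

The batch dimension stays dynamic; only the container types are pinned down for the compiler.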

But not AOTInductor, unfortunately. I made some progress by forking torchvision and trying to remove the use of ImageList and other Python data structures, remove control flow (often hard-coding assumptions about the input data), replace Python indexing with torch.narrow, etc. But I still ran into unbacked symint issues when the NMS step is applied in the RPN, and I wasn't sure how to get around the fact that NMS is data-dependent. If it's helpful, I documented my progress here: pytorch/pytorch#121036
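The unbacked symint comes from NMS returning a variable number of kept indices. One workaround I've seen discussed (my own sketch here, not something from torchvision or the linked issue) is to pad or truncate the kept-index tensor to a fixed `max_det`, so every downstream shape is static at the cost of sentinel entries:

```python
import torch


def pad_keep(keep: torch.Tensor, max_det: int) -> torch.Tensor:
    """Pad/truncate NMS 'keep' indices to a fixed length.

    `keep` is the 1-D index tensor NMS returns; its length depends on the
    data. Padding with -1 sentinels gives the exporter a static shape;
    consumers must then mask out the -1 entries.
    """
    out = torch.full((max_det,), -1, dtype=keep.dtype, device=keep.device)
    n = min(keep.numel(), max_det)
    out[:n] = keep[:n]
    return out
```

For example, `pad_keep(torch.tensor([3, 1]), 4)` yields `tensor([3, 1, -1, -1])`. The trade-off is that `max_det` must be chosen up front and dummy detections flow through the rest of the graph.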

Both methods for exporting were relatively painful. I'm hoping AOTInductor comes up with a solution for handling data-dependent shapes, or at least makes it easier to write code that handles them. I realize it's still early days for AOTInductor, but documentation would go a long way. I'd be happy to contribute, but I still feel fairly new to the process of handling data-dependent shapes.

