
Comments (13)

adamjstewart commented on May 20, 2024

Preliminary results look very promising!
[benchmark results figure]


calebrob6 commented on May 20, 2024

I'd do the first matrix as quickly as possible because the results of that are going to be very informative. If that all works out, then you can repeat the same with a VectorDataset.

> File format: GeoTIFF vs. HDF5, Shapefile vs. GeoJSON

I don't think this is important right now, i.e., we can just assume the data is in a good format (COG and Shapefile/GeoPackage).

> Warping strategy

In the above sketch, you can repeat the experiments with the manually aligned versions of the dataset to test the "already in correct CRS/res" case; the first set of experiments covers the "change CRS and res" case. It might be interesting to see whether warping or resampling is more expensive, but I don't think that's interesting for the paper.

> Also, do we want to compare with different batch_sizes or different num_workers?

Sure! These experiments should be very quick to run once you have a script for them.


adamjstewart commented on May 20, 2024

I think this will require a significant rework of our __getitem__ implementation. Right now, we warp and then merge/sample from a tile at the same time. If we want to benefit from the 2-step random tile/chip sampling strategy, we'll have to use an LRU cache on the entire tile after warping.
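A minimal sketch of that caching step, assuming an in-process functools.lru_cache and a fixed target CRS (both the cache size and the CRS are placeholders):

```python
from functools import lru_cache

import numpy as np
import rasterio
from rasterio.vrt import WarpedVRT


@lru_cache(maxsize=16)  # placeholder size; see the RAM discussion below
def load_warped_tile(path: str) -> np.ndarray:
    """Warp an entire tile once; chips are then sliced from the cached array."""
    with rasterio.open(path) as src, WarpedVRT(src, crs="EPSG:4326") as vrt:
        return vrt.read()
```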


adamjstewart commented on May 20, 2024

I think we can also consider the following I/O strategies:

  • load/warp entire file, no caching (worst case scenario)
  • load/warp entire file, caching (good default)
  • load/warp single window (does not allow for caching)

Merging should happen after the fact so that (tile 1, tile 2, tile 1 + 2) don't end up being 3 different entries in the cache.
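A sketch of that cache layout, where cache keys are single file paths and merging happens outside the cache (merge_arrays is a hypothetical stand-in for the custom merge step; the loader body is elided):

```python
from functools import lru_cache


@lru_cache(maxsize=16)  # one entry per file, never per file combination
def load_warped(path: str):
    ...  # warp and read the whole file, as in the previous sketch


def sample_chip(paths, bounds):
    # Queries touching tile 1, tile 2, or both reuse the same two cached
    # arrays rather than creating a third (tile 1 + tile 2) entry.
    arrays = [load_warped(p) for p in paths]
    return merge_arrays(arrays, bounds)  # hypothetical post-cache merge
```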

I don't think we need to consider situations in which we:

  • warp the file on disk on-the-fly (violates principle of least surprise)
  • don't bother to warp (won't work if not already in correct CRS/res)
  • don't bother to merge (won't be correct near boundaries)
  • always return entire tiles instead of chips (not feasible for model/GPU memory)
  • load the entire dataset into memory beforehand (won't fit in RAM)

These strategies make sense for tile-based raster images, but are slightly more complicated for vector geometries or static regional maps. We may need to change the default behavior based on the dataset.


adamjstewart commented on May 20, 2024

For timing, we should choose some arbitrary epoch size, then experiment with various batch sizes and see how long it takes to load an entire epoch.
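A rough harness for that, assuming a map-style dataset and sampler and an arbitrary fixed epoch length (all names here are placeholders):

```python
import time

from torch.utils.data import DataLoader


def time_epoch(dataset, sampler, batch_size, collate_fn=None):
    """Time one full pass over whatever epoch the sampler defines."""
    loader = DataLoader(
        dataset, batch_size=batch_size, sampler=sampler, collate_fn=collate_fn
    )
    start = time.perf_counter()
    for _ in loader:  # load every batch, discard the data
        pass
    return time.perf_counter() - start
```

Repeating this for several batch_size values over the same number of samples gives directly comparable epoch times.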


adamjstewart commented on May 20, 2024

Here's where I'm currently stuck, to remind myself when I next pick this up:

Our process right now is:

  1. Open filehandles for raw data (rasterio.open)
  2. Open filehandles for warped VRTs (rasterio.vrt.WarpedVRT)
  3. Merge VRTs to get an array (rasterio.merge.merge)
  4. Return array as tensor

Steps 1 and 2 don't actually do anything and are almost instantaneous. It isn't until you actually try to read() the data that warping occurs, and read() is called inside rasterio.merge.merge. If we want to cache this reading of warped data, we'll have to call vrt.read() ourselves.

Since rasterio.merge.merge only accepts filenames or filehandles as input, we'll basically need to implement our own merge algorithm that takes one or more cached numpy arrays, creates a new array with the correct dimensions, and indexes the old arrays to copy the data over. The hard part will be keeping track of coordinates, nodata values, and merging correctly. See https://github.com/mapbox/rasterio/blob/master/rasterio/merge.py for the source code, most of which we'll need to reimplement.
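Roughly, the four steps map onto rasterio calls like this, and only the merge step actually touches pixels (path, CRS, and bounds are placeholders):

```python
import rasterio
import torch
from rasterio.merge import merge
from rasterio.vrt import WarpedVRT

bounds = (-120.0, 35.0, -119.9, 35.1)          # hypothetical query window
src = rasterio.open("tile.tif")                # 1. cheap: just a filehandle
vrt = WarpedVRT(src, crs="EPSG:4326")          # 2. cheap: warping is lazy
chip, transform = merge([vrt], bounds=bounds)  # 3. read() runs here, warping with it
sample = torch.from_numpy(chip)                # 4. return the array as a tensor
```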


adamjstewart commented on May 20, 2024

Another hurdle: the size of each array depends greatly on the dataset, but most are around 0.5 GB per file. We can't really assume users have more than 8 GB of RAM, which greatly limits our LRU cache size. If we want to make things more flexible, we could use something like psutil to query the system memory and hard-code the average file size for each dataset.
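One way that sizing could look, using psutil (the half-of-RAM budget and the file size are placeholder assumptions):

```python
import psutil

AVG_FILE_SIZE = 0.5 * 2**30  # assumed ~0.5 GB per warped file

# Devote at most half of the currently available RAM to the LRU cache.
available = psutil.virtual_memory().available
cache_size = max(1, int(available // 2 // AVG_FILE_SIZE))
```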


adamjstewart commented on May 20, 2024

For now, I think we can rely on GDAL's internal caching behavior. When I read a VRT the second time around, it's significantly faster: still not as fast as reading the raw data or indexing into an already-loaded array, but good enough for a first round of benchmarking. GDAL also lets you configure the cache size.
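For reference, GDAL's block cache can be sized with the GDAL_CACHEMAX config option, e.g. through rasterio.Env (512 MB is an arbitrary choice, and the filename is a placeholder):

```python
import rasterio

# GDAL_CACHEMAX is interpreted as megabytes for values this small
with rasterio.Env(GDAL_CACHEMAX=512):
    with rasterio.open("scene.tif") as src:
        data = src.read()  # repeated reads now hit GDAL's block cache
```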


calebrob6 commented on May 20, 2024

@adamjstewart, sketch of the full experiment:

  • Get Landsat scenes from several projections and CDL data.
    • Convert all to COG if they aren't already
  • Copy the files to blob storage, local SSD, local HDD
  • Create three GeoDataset instances, one for each dataset location
  • Record the number of patches per second you can read using each of the GeoDatasets with the different types of GeoSamplers
  • Record how long it takes to warp/reproject the Landsat scenes to align them to CDL (or vice versa) "by hand" with gdalwarp.
    • Also record the size of the resulting files.
    • Also record the nasty gdalwarp command you actually have to figure out and execute to do this (see the sketch after this list).
    • Note: We can use this to extrapolate how much preprocessing you would need to do before training with a traditional DL library.
    • Note: We can use this pre-aligned data with a custom dataloader to see how many patches/second you could sample if you did do all the preprocessing up front. Hopefully this number is similar to what you get with torchgeo (or at least not much larger).
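For the record, a hypothetical version of that gdalwarp invocation (assuming CDL's grid is EPSG:5070 at 30 m; filenames are placeholders), wrapped in subprocess so it can be scripted:

```python
import subprocess

# Hypothetical: align a Landsat scene to CDL's assumed grid (EPSG:5070, 30 m)
subprocess.run(
    [
        "gdalwarp",
        "-t_srs", "EPSG:5070",  # target CRS
        "-tr", "30", "30",      # target resolution
        "-r", "bilinear",       # resampling method
        "-of", "COG",           # write a Cloud Optimized GeoTIFF
        "landsat_scene.tif",
        "landsat_scene_aligned.tif",
    ],
    check=True,
)
```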


adamjstewart commented on May 20, 2024

@calebrob6 the above proposal covers the matrix of:

  • Data location: local SSD, local HDD, blob storage
  • Sampling strategy: RandomGeoSampler, RandomBatchGeoSampler, GridGeoSampler
  • I/O strategy: cached, not cached
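As a sketch, one cell of that matrix might be measured like this with torchgeo's public API (paths and numbers are placeholders, and the patches/second bookkeeping is simplified):

```python
import time

from torch.utils.data import DataLoader
from torchgeo.datasets import CDL, Landsat8, stack_samples
from torchgeo.samplers import RandomGeoSampler

landsat = Landsat8("data/landsat")  # placeholder path
cdl = CDL("data/cdl")               # placeholder path
dataset = landsat & cdl             # spatial intersection of the two

sampler = RandomGeoSampler(dataset, size=256, length=1024)
loader = DataLoader(dataset, sampler=sampler, collate_fn=stack_samples)

start = time.perf_counter()
n_patches = sum(1 for _ in loader)
print(f"{n_patches / (time.perf_counter() - start):.1f} patches/sec")
```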

There are a lot of additional constraints that we're currently skipping:

  • File format: GeoTIFF vs. HDF5, Shapefile vs. GeoJSON
  • Warping strategy: already in correct CRS/res, change CRS, change res, change CRS and res

Do you think it's fine to skip these for the sake of time? I doubt reviewers would reject us outright for not including one of these permutations, and they can always ask us to perform additional experiments if they want.

Also, we should definitely benchmark not only RasterDataset but also VectorDataset (maybe Sentinel + Canadian Building Footprints?). Should I purposefully change the resolution of one of these datasets? Should I purposefully switch to a CRS different from all files, or keep the CRS of one of the files?


adamjstewart commented on May 20, 2024

Also, do we want to compare with different batch_sizes or different num_workers?


calebrob6 commented on May 20, 2024

Some things to discuss soon:

  • How to benchmark CDL/Landsat from blob containers?
  • How to compare patches/sec to something that people might understand (torchvision.datasets.ImageNet seems like a good idea)?
  • How to handle the no-warp/reproject case (e.g., do we warp/crop CDL to each of the Landsat scenes so that the pixels align?)


adamjstewart commented on May 20, 2024

We're following up on this discussion in #1330 (comment)

