
deck.gl-raster's Introduction

deck.gl-raster


deck.gl layers and WebGL modules for client-side satellite imagery processing on the GPU.

Landsat 8 Modified Soil Adjusted Vegetation Index over the Grand Canyon and Kaibab Plateau, with the cfastie colormap.

Overview

deck.gl is a great WebGL-accelerated geospatial rendering engine for the browser. deck.gl layers are designed to be composable and easy to extend. As such, small variations of the pre-built layers can do amazing new things, while not being fit for inclusion in the standard layer library.

This repository contains deck.gl layers and reusable WebGL modules for rendering and computation on rasters, especially satellite imagery.

Documentation Website

deck.gl-raster's People

Contributors

dependabot[bot], kylebarron, zakjan


deck.gl-raster's Issues

Dusting off and potential unification with viv

I've been informally chatting with @manzt recently about a generic deck.gl layer for visualizing and operating on image data that can be used both for geospatial and microscopy (and more!) use cases.

I've been working on a new project at @developmentseed called lonboard that connects deck.gl to Python for geospatial applications, and Development Seed does a ton of client work with raster data, so I may have some time to invest in a potential refreshed raster visualization layer.

The purpose of this issue is to see if there's potential for collaboration with viv folks on low-level multi-channel raster/image visualization that can be generic enough for many use cases.

There are three main parts to the existing deck.gl-raster layout (reflected in the directory structure): data loading, deck.gl layers, and WebGL modules to generate shader pipelines. Each of these should be fully composable and generic across geo/microscopy use cases.

Data Loading

deck.gl-raster's primary existing data loader is the NPYLoader, which loads the numpy file format.

Other possible loaders would include (Cloud-Optimized) (Geo)TIFF and Zarr (GeoZarr and OME-Zarr). For tiled sources, a generic deck.gl TileLayer may be used to render multiple RasterLayers.

Deck.gl Layer

Currently, this project has two layers, the RasterLayer and the RasterMeshLayer. The former extends the BitmapLayer and the latter is a combination of the RasterLayer and the MeshLayer to be used for geospatial terrain rendering. For the discussion at hand, we can ignore the RasterMeshLayer and focus on the RasterLayer.

The design of the RasterLayer was to add three props to the BitmapLayer:

  • modules: an array of shader modules to be compiled into a rendering pipeline and used internally to the layer. Both images and moduleProps below are passed as parameters into modules.
  • images: an object (roughly Record<string, ImageBitmap | Texture2D>) containing images/textures. Originally this was separated out from the rest of the props to enable images to be loaded asynchronously (iirc).
  • moduleProps: an object (Record<string, any>) with non-image properties to be passed into the shader modules.
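As a concrete (untested) sketch of how these three props fit together — the module objects below stand in for the real modules in `webgl/`, and the `imageBands`/`imageColormap` keys are illustrative:

```javascript
// Stand-ins for the real shader modules in webgl/; only the name matters here.
const combineBands = {name: 'combine_bands'};
const normalizedDifference = {name: 'normalized_difference'};
const colormap = {name: 'colormap'};

// Props for a hypothetical NDVI pipeline: combine two bands, compute the
// normalized difference, then map the scalar result through a colormap.
const rasterLayerProps = {
  id: 'ndvi-raster',
  modules: [combineBands, normalizedDifference, colormap],
  images: {
    // in real usage these would be ImageBitmap or Texture2D objects
    imageBands: ['<red band texture>', '<nir band texture>'],
    imageColormap: '<cfastie colormap texture>',
  },
  moduleProps: {},
};
```

The layer would then be constructed as `new RasterLayer(rasterLayerProps)`.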

The fragment shader of the RasterLayer adds two shader injection points, DECKGL_CREATE_COLOR and DECKGL_MUTATE_COLOR. These are injected here. These allow an image to be instantiated and then one or more modules to mutate color values (it's possible these should be joined into a single injection point).

Then the list of shader modules is assembled into a string that is injected here.

Shader modules

A variety of geospatial algorithms have already been implemented as gpu modules, see the webgl/ folder. These are relatively straightforward; they take a four-channel image as input and return some sort of array as output.

One very important thing about shader modules is that the output dimension of one stage needs to match the input dimension of the next stage. If you upload an RGB image and then apply a colormap, it'll silently ignore the G and B channels of the original image. If you apply something like Normalized Difference (e.g. for NDVI) and then don't apply a colormap, I think you'll see shades of red, since the GPU interprets the single output channel as the red channel. Having better validation for this would be very helpful.
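A minimal sketch of that validation: each module declares how many channels it consumes and produces, and adjacent stages are checked. The `inputChannels`/`outputChannels` fields are hypothetical, not part of the current module format.

```javascript
// Throw if any stage's output channel count doesn't match the next stage's
// input channel count. Module shape is assumed, not the current API.
function validatePipeline(modules) {
  for (let i = 1; i < modules.length; i++) {
    const prev = modules[i - 1];
    const curr = modules[i];
    if (prev.outputChannels !== curr.inputChannels) {
      throw new Error(
        `${prev.name} outputs ${prev.outputChannels} channel(s) but ` +
          `${curr.name} expects ${curr.inputChannels}`
      );
    }
  }
}

// e.g. normalized difference collapses bands to one channel, which a
// colormap then consumes:
validatePipeline([
  {name: 'combine_bands', inputChannels: 4, outputChannels: 4},
  {name: 'normalized_difference', inputChannels: 4, outputChannels: 1},
  {name: 'colormap', inputChannels: 1, outputChannels: 4},
]);
```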

Pieces of work

Update to latest deck.gl

I haven't touched this code for a few years and I have no idea if it works with the latest deck.gl.

Conversion to TypeScript

deck.gl 8.9 supports TypeScript, and converting would probably make the code a lot more maintainable.

More stable raster pipelines

As I alluded to in the shader modules section above, it's easy to shoot yourself in the foot when creating a raster pipeline.

Ensure compatibility with non-geo views

Presumably viv uses non-geospatial views. I haven't tested the existing RasterLayer in a non-geospatial use case, but I don't remember anything off-hand that would break.

Open Questions

  • Can we ignore WebGL 1 entirely at this point?
  • How much will deck v9's move to WebGPU-compatible shaders change the content of these layers?

cc @manzt and I'll let you cc anyone else who should be included.

How to handle modules multiple times?

E.g. a module for linear rescaling might be used at multiple stages of a pipeline. But even if the modules array accepts the same module twice, setModuleProps only takes a single object, so you couldn't pass two different sets of props.
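One possible direction, sketched under assumed shapes (the `{module, props}` pair and per-instance prefixing are not part of the current API): give each module instance a unique name and namespace its props, so two uses of the same module read different uniforms.

```javascript
// Turn a list of {module, props} pairs into modules with per-instance names
// and prefixed prop keys, so the same module can appear twice with
// different props. The instance/prefix convention is hypothetical.
function instantiateModules(instances) {
  return instances.map(({module, props}, i) => ({
    ...module,
    name: `${module.name}_${i}`, // unique per instance
    props: Object.fromEntries(
      Object.entries(props).map(([k, v]) => [`${module.name}_${i}_${k}`, v])
    ),
  }));
}

const rescale = {name: 'linear_rescale'};
const pipeline = instantiateModules([
  {module: rescale, props: {scale: 2}},
  {module: rescale, props: {scale: 0.5}},
]);
```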

Mask layer by multipolygon

Hello.
Your project is awesome.
I hope you will add index (NDVI) picking.
But I need one more feature.
I would like to mask the resulting or source layer by a multipolygon.
The polygons represent my area of interest, and I would like to hide the areas outside it.

Port rio-color operations

Gamma adjustment adjusts RGB values according to a power law, effectively brightening or darkening the midtones. It can be very effective in satellite imagery for reducing atmospheric haze in the blue and green bands.

Sigmoidal contrast adjustment can alter the contrast and brightness of an image in a way that matches humans' non-linear visual perception. It works well to increase contrast without blowing out the very dark shadows or already-bright parts of the image.

Saturation can be thought of as the "colorfulness" of a pixel. Highly saturated colors are intense and almost cartoon-like, low saturation is more muted, closer to black and white. You can adjust saturation independently of brightness and hue but the data must be transformed into a different color space.

Saturation looks like a more difficult operation, plus it already exists in luma.gl's shadertools.

But gamma and sigmoidal don't look that hard to port.

COGLayer

COGLayer: A deck.gl layer to render a COG directly

Overview

A COG is a single GeoTIFF file that holds multiple resolutions of raster data. See https://cogeo.org for more info.

COGs are internally tiled. Thus they're a great fit for the TileLayer, which renders a tiled dataset. However the TileLayer can't be used directly, because it assumes a web-mercator tiling system. Even COGs that are in the web mercator projection can't be used directly, because the internal indexing doesn't match the global OSM indexing.

Hence there are three main steps I need to do to get this working:

  1. Parse COG metadata to construct a 2D TileMatrixSet
  2. Custom TileLayer that places an arbitrary matrix of tiles
  3. Reproject data in the BitmapLayer

Parse COG metadata to construct TileMatrixSet

Basically working code:

Code
var fs = require('fs')
var geotiff = require('./node_modules/geotiff/dist/geotiff.bundle')
var buf = fs.readFileSync('m_3411861_ne_11_060_20180723_20190208.tif')

// Await each promise before using its result (e.g. in a REPL or async
// context); assigning inside .then() and reading the variable
// synchronously is a race.
var tif = await geotiff.fromArrayBuffer(buf.buffer)
var image = await tif.getImage()


/**
 * Get the geotransform of the image at a specific overview level
 *
 * @param  {Object} tif GeoTiff object
 * @param  {Number} ovrLevel overview level
 * @return {Array}   6-element geotransform array
 */
function getGeotransform(tif, ovrLevel = 0) {
  var ifds = tif.fileDirectories;
  var firstIfd = ifds[0][0];

  // Calculate overview for source image
  var gt = [
    firstIfd.ModelPixelScale[0],
    0.0,
    firstIfd.ModelTiepoint[3],
    0.0,
    -firstIfd.ModelPixelScale[1],
    firstIfd.ModelTiepoint[4],
  ]

  // Decimate the geotransform if an overview is requested
  if (ovrLevel > 0) {
    var bounds = getBounds(gt, firstIfd)
    var ifd = ifds[ovrLevel][0];
    // TODO implement the translation and scale
    // Maybe use math.gl?
    gt = affine.Affine.translation(bounds[0], bounds[3]) * affine.Affine.scale(
        (bounds[2] - bounds[0]) / ifd.ImageWidth.value,
        (bounds[1] - bounds[3]) / ifd.ImageHeight.value,
    )
  }
  return gt;
}

function getBounds(gt, firstIfd) {
  const tlx = gt[2]
  const tly = gt[5]
  const brx = tlx + (gt[0] * firstIfd.ImageWidth)
  const bry = tly + (gt[4] * firstIfd.ImageLength)
  return [tlx, bry, brx, tly];
}


function create_tile_matrix_set(tif) {
  var matrices = [];
  var ifds = tif.fileDirectories;
  var firstIfd = ifds[0]

  for (var i = 0; i < ifds.length; i++) {
    var ifd = ifds[i][0];
    var gt = getGeotransform(tif, i);

    var matrix = {
      identifier: ifds.length - i - 1,
      topLeftCorner: [gt[2], gt[5]],
      tileWidth: ifd.TileWidth,
      tileHeight: ifd.TileLength,
      matrixWidth: Math.ceil(ifd.ImageWidth / ifd.TileWidth),
      matrixHeight: Math.ceil(ifd.ImageLength / ifd.TileLength),
      scaleDenominator: gt[0] / 0.28e-3,
    }
    matrices.push(matrix);
  }

  // Reverse order of array so that identifier 0 (lowest resolution) is first
  matrices.reverse();

  // Is this stable?
  var geoKeys = firstIfd[1];
  var epsg = geoKeys.ProjectedCSTypeGeoKey || geoKeys.GeographicTypeGeoKey

  var tms = {
    title: "Tile matrix for GeoTIFF",
    identifier: 'identifier or str(uuid.uuid4())',
    supportedCRS: "http://www.opengis.net" + `/def/crs/EPSG/0/${epsg}`,
    tileMatrix: matrices
  }

  return tms;
}

Note: you should be able to use math.gl where affine is used. If you look into the affine code, the source for the transformation matrices is pretty simple.

Custom TileLayer that supports TMS

While this could conceivably be included in deck.gl upstream, for now it would be better to work on it locally. The TMS provides a mapping from x, y, z values to locations in space. Note these are arbitrary locations, and should not be confused with the x, y, z indexing in the OSM system.

There are a few parts to this:

  • Determining reprojection function
  • Finding visibility

Determining reprojection function

First you need to find the reprojection string. So if the EPSG code in the geotiff is EPSG:26911, send a request to

https://epsg.io/26911.proj4

that returns

+proj=utm +zone=11 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs

Then pass that to proj4.
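A sketch of that lookup, assuming the proj4js global `proj4` and a fetch-capable environment (the URL pattern is the epsg.io one above):

```javascript
// Build the epsg.io URL for a code's proj4 definition string.
function epsgProjUrl(epsgCode) {
  return `https://epsg.io/${epsgCode}.proj4`;
}

// Fetch the definition and reproject a coordinate pair to WGS84 lon/lat.
// Assumes proj4js is loaded as the global `proj4`.
async function reprojectPoint(epsgCode, xy) {
  const resp = await fetch(epsgProjUrl(epsgCode));
  const projString = await resp.text();
  return proj4(projString, 'EPSG:4326', xy);
}
```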

Finding visibility

Deck.gl uses frustum culling to determine the visibility of tiles. Hence I need to modify the tile-2d-traversal.js script to work with an arbitrary TMS.

I'll need to overwrite getBoundingVolume to find the tile's bounding box in the local CRS, then reproject it. Note that getBoundingVolume deals in deck.gl's common space, which is a web mercator projection of world size 512, with the origin in the top left.

Note that one caveat to TMS is that child tiles don't have strict nesting. Hence I believe it’s not always true that for any tile t, it has only one parent. This makes visibility computations harder, as finding a tile's children is less straightforward. With OSM indexing, given any tile t, there are always exactly four children tiles. With non-strict children, you essentially need to make a pass over all tiles at that level, checking if the tile's bounding box intersects with the parent.

Therefore you'll also have to overwrite the children method to determine the tile's children.
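The per-level pass described above might look like the following sketch. The `pixelSpanX`/`pixelSpanY` fields (map units per pixel at a level) are assumptions; in practice they'd be derived from the matrix's scaleDenominator (span = scaleDenominator * 0.28e-3, inverting the formula used to build the TileMatrixSet).

```javascript
// Bounding box of tile (x, y) in the local CRS, given a tile matrix with
// the assumed pixelSpanX/pixelSpanY fields. Returns [minX, minY, maxX, maxY].
function tileBounds(matrix, x, y) {
  const [originX, originY] = matrix.topLeftCorner;
  const w = matrix.tileWidth * matrix.pixelSpanX; // map units per tile
  const h = matrix.tileHeight * matrix.pixelSpanY;
  return [originX + x * w, originY - (y + 1) * h, originX + (x + 1) * w, originY - y * h];
}

// [minX, minY, maxX, maxY] overlap test
function intersects(a, b) {
  return a[0] < b[2] && a[2] > b[0] && a[1] < b[3] && a[3] > b[1];
}

// Children of tile (x, y): every tile at the child level whose bounding box
// intersects the parent's, since TMS levels don't nest strictly.
function findChildren(parentMatrix, childMatrix, x, y) {
  const parent = tileBounds(parentMatrix, x, y);
  const children = [];
  for (let cy = 0; cy < childMatrix.matrixHeight; cy++) {
    for (let cx = 0; cx < childMatrix.matrixWidth; cx++) {
      if (intersects(tileBounds(childMatrix, cx, cy), parent)) {
        children.push([cx, cy]);
      }
    }
  }
  return children;
}
```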

Since Xiaoji is such a good engineer, you might have to overwrite only those two methods!

Reproject data in renderSubLayers

This is explored a bit in #37. Basically you'll need to reproject the vertices within createMesh from the source CRS to one deck can understand.

Note that the mesh already is created at a certain resolution that's derived from the zoom level.

Note that this will only be possible with the RasterLayer, not the RasterMeshLayer, as the mesh layer needs to combine elevation data, and that would only work if you could fetch elevation data in the same local coordinate system.

Renderbuffer for convolutions?

Can you do a two-step process, where you first load the textures into a framebuffer/renderbuffer, and then the second step renders from there? Would that help with convolutions where you need data outside the tile?

Use of categorical datasets and sampling integer values

Any advice on how to retrieve integer values from an image tile as opposed to RGB float values? I can’t figure out any loader settings to do this correctly, or maybe what I'm trying to do isn't possible.

Use case is I’m dealing with categorical raster datasets (in this case landcover), which use integer values and I’m writing a custom module to calculate the difference / pixel movement between two images, and also a module to reclassify an image. So I’d like to be able to retrieve the integer values from the red and green channels of the resulting vec4 image after running combineBands on the two images.

Thanks

RasterLayer reprojection

Note: this original "RFC" is only for a single, simple GeoTIFF. A multi-resolution image like a COG is much harder because you have to handle visibility of internal tiles to figure out what to load.

There's a continuum of options between reprojecting every pixel and reprojecting only the corners.

  1. Load geotiff
  2. Find EPSG code from GeoTIFF metadata, see here, otherwise return. From geotiffjs/geotiff.js#176:

[When the source image is not in WGS84], the problem is more tricky, as you first need to transform the image coordinates to the native projection and then reproject to WGS84. This should be doable with proj4js when you have the according projection information. If you are lucky they are simply set in ProjectedCSTypeGeoKey or GeographicTypeGeoKey and you can get the projections WKT representation from epsg.io. See here how it was done in the COG-Explorer.

Therefore, to reproject an arbitrary pixel, given source_crs, dest_crs, pixel_x, pixel_y, image_height, image_width (watch out for flipped-y coordinates in images), you should be able to do something like the pseudocode:

src_crs_x = (pixel_x / image_width) * (bbox_max_x - bbox_min_x) + bbox_min_x
// y is flipped in image space: row 0 is the top of the image
src_crs_y = bbox_max_y - (pixel_y / image_height) * (bbox_max_y - bbox_min_y)

proj4(source_crs, dest_crs, [src_crs_x, src_crs_y])

Then the next question is how many points to reproject. You can reproject only the corners, but for non-linear transformations, the inner pixels won't be in exactly the right place.

Alternatively you can reproject every image pixel, so that every pixel is in exactly the right position. To do this, you'd modify the existing mesh-creation code. (Since 8.2 when the GlobeView was added, the BitmapLayer creates a mesh and uses the GL.TRIANGLES draw mode.) The simplest way to do this then is to make a uniform mesh where every pixel gets two triangles. Then use those triangles as the inner mesh state.
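The uniform per-pixel mesh might be generated like this sketch (positions in image space; each vertex would then be reprojected before upload):

```javascript
// Build a uniform mesh with one vertex per pixel corner and two triangles
// per pixel. For a width x height image: (width+1)*(height+1) vertices and
// 2*width*height triangles.
function makePixelMesh(width, height) {
  const positions = [];
  for (let y = 0; y <= height; y++) {
    for (let x = 0; x <= width; x++) {
      positions.push([x, y]);
    }
  }
  const indices = [];
  const stride = width + 1;
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const tl = y * stride + x; // top-left vertex of pixel (x, y)
      // two triangles covering pixel (x, y)
      indices.push(tl, tl + 1, tl + stride);
      indices.push(tl + 1, tl + stride + 1, tl + stride);
    }
  }
  return {positions, indices};
}
```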

References:

API review

You really need an API review for consistency:

  • Case consistency: camelCase everywhere?

Examples broken

Hey Kyle, Harley from Pachama! Seems like our time together was too early and we're only just looping around to COG integration now (we had a huge piece of work getting our datasets COG ready).

I'm just starting to look at how we can use this awesome package and I noticed the examples were broken. It seems there's a CORS issue (or possibly missing data issue) that has broken them:

[Screenshot: Screen Shot 2021-08-24 at 10.04.32 am]

More flexible channels, types

Right now I allow a maximum of 4 channels (bands) at a time in memory. At some point a user may wish to have 5 bands in a single algorithm.

A possible solution is to optionally store/allocate two image textures. I.e. right now I pass one vec4 image through the pipeline, but in the future you could pass two: vec4 image; vec4 image2.

Since I believe WebGL supports overloading, most algorithms could have code like:

vec4 sigmoidalContrast(vec4 arr, float contrast, float bias) {
...
}

vec4[2] sigmoidalContrast(vec4 arr1, vec4 arr2, float contrast, float bias) {
	return vec4[2](
		sigmoidalContrast(arr1, contrast, bias),
		sigmoidalContrast(arr2, contrast, bias)
	);
}

(Returning an array like this requires GLSL ES 3.00, i.e. WebGL 2; in WebGL 1 you'd need out parameters instead.)

Bands expression layer?

Not sure if it's possible to dynamically define an expression to run on the shader. I suppose you could inject shader code, but then the user would have to make sure that the shader code is valid.

"Folder" imports?

E.g.

import * as WebGLModules from '@kylebarron/deck.gl-raster/webgl';
import * as DeckGLModules from '@kylebarron/deck.gl-raster/deckgl';

Research how these "folder imports" work. Are they still tree-shakeable?

Compute slope

Handle 260px input

In order to compute slope, you use the surrounding 8 pixels. Mapbox GL JS assembles this by backfilling from neighboring tiles. This is annoying and tricky, so instead I'll just handle exporting 260px/516px tiles from dem-mosaic-tiler

https://github.com/mapbox/mapbox-gl-js/blob/cfeff87b2f16fdd8480d7b760d9c19f4accb7485/src/shaders/hillshade_prepare.vertex.glsl#L9-L15

void main() {
    gl_Position = u_matrix * vec4(a_pos, 0, 1);

    highp vec2 epsilon = 1.0 / u_dimension;
    float scale = (u_dimension.x - 2.0) / u_dimension.x;
    v_pos = (a_texture_pos / 8192.0) * scale + epsilon;
}

Algorithm

See kylebarron/serverless-slope#6 for a working pure-numpy implementation. See also the Mapbox GL JS hillshading shader code.

  • Latitude correction
  • Don't forget you need the pixel size (meters per pixel). In the linked issue I got that from rasterio, but you need to compute it based solely on the zoom level. Look to Mapbox's code.
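A sketch of deriving pixel size from the zoom level alone, assuming 256px web mercator tiles and a spherical earth radius of 6378137 m:

```javascript
// Ground resolution of a web mercator pixel: the earth's circumference
// divided by the number of pixels around the world at this zoom, scaled by
// cos(latitude) because mercator stretches away from the equator.
const EARTH_CIRCUMFERENCE = 2 * Math.PI * 6378137; // meters at the equator

function metersPerPixel(zoom, latitude = 0) {
  const cosLat = Math.cos((latitude * Math.PI) / 180);
  return (EARTH_CIRCUMFERENCE * cosLat) / (256 * 2 ** zoom);
}
```

Each zoom level halves the pixel size, so the correction only needs the tile's z and center latitude.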

Mipmaps?

Mipmaps don't work for non-power-of-2 textures I think. So

  • Do you need mipmaps for rendering? Not sure exactly what mipmaps do... just make it faster to render?
  • If so, it might be ideal to pass the 260x260px texture to the initial framebuffer pass, and then copy (blit?) the inner 256x256px to the next texture for rendering. Ref #20.

Explore using Framebuffers

I think an initial pass to render to a framebuffer might be more performant for spectral indices, or especially complex things like Landsat level 3 products.

I think that the existing one-pass architecture re-renders every frame, so it must re-compute the spectral index every frame. For something like level 3 products, that's almost certainly too compute-intensive.

It would seem like framebuffers could make this performant, because the index only needs to be computed once. The input data is static and not changing. Then pass the index to the render pass, which only needs the index + colormap, which can be substituted on the fly. Just applying the colormap to the index should be really fast to do on the fly and while panning.

Update module props without needing a texture reload

Currently in landsat8.earth, I require a texture reload on every prop change:

    updateTriggers: {
      // Need to expand landsatBands array since comparison is shallow
      getTileData: [
        landsatMosaicId,
        naipMosaicId,
        landsatColormapName,
        landsatBandCombination,
        ...landsatBands,
      ],
    },

For these props (except for BandCombination), it's ok because the actual image texture inputs are changing. But now that I'm supporting more algorithms, things like a filter or sigmoidal contrast parameter changing should not need a texture reload.

Slow performance due to successive ProgramManager.addShaderHook calls

I recently noticed some poor performance, especially when panning (and thus creating new RasterLayers), which I was eventually able to debug down to:

ProgramManager.getDefaultProgramManager(gl).addShaderHook(
  "fs:DECKGL_MUTATE_COLOR(inout vec4 image, in vec2 coord)"
);
ProgramManager.getDefaultProgramManager(gl).addShaderHook(
  "fs:DECKGL_CREATE_COLOR(inout vec4 image, in vec2 coord)"
);

While _getModel is run only once per layer, essentially on initialization, the default ProgramManager is shared by all deck.gl layers. This means that each new layer that's rendered as a sublayer of the TileLayer adds new hooks.

Therefore the shader hooks on the program manager end up increasing more and more:


It appears there isn't a general solution in luma.gl to only add a hook once globally, so I'll just check programManager._hookFunctions before adding.
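The guard might look like the following sketch. `_hookFunctions` is an internal, undocumented luma.gl field, so its exact shape may vary across versions; here it's assumed to be an array of hook signature strings.

```javascript
// Only add the shader hook if the shared ProgramManager doesn't already
// have it, making repeated layer initialization idempotent.
function addShaderHookOnce(programManager, hook) {
  const existing = programManager._hookFunctions || [];
  if (!existing.includes(hook)) {
    programManager.addShaderHook(hook);
  }
}

// stand-in manager object demonstrating the idempotence:
const manager = {
  _hookFunctions: [],
  addShaderHook(hook) { this._hookFunctions.push(hook); },
};
addShaderHookOnce(manager, 'fs:DECKGL_MUTATE_COLOR(inout vec4 image, in vec2 coord)');
addShaderHookOnce(manager, 'fs:DECKGL_MUTATE_COLOR(inout vec4 image, in vec2 coord)');
```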

Raster diff with deck.gl-raster

Hi, I'm trying to understand possible use cases of this library, and this in particular is one that I'm looking for client-side solutions for. Let's say I have two simple single-band rasters A and B, both as a tiled source (currently in Mapbox): could the library render a difference of B minus A to show the pixel-by-pixel change between the two?

Layer Picking

Trying to pick from rendered layers gives color: null and picked: false.

Support 16bit bands

There are a couple possibilities for supporting 16 bit raster bands.

  1. Raw uint16 bytes as a binary blob

  2. 16-bit PNG grayscale.

    Canvas methods don't support 16 bits, so you'd need to use a custom PNG parser, like UPNG.js. Additionally, the PNG spec defines multi-byte values to be in big-endian order, while WebGL requires input data in system endianness, which is usually little-endian. So you'd have to copy the data into a new TypedArray, switching to little-endian.

    Pseudocode for switching to little-endian:

    const buffer = read image  // however the PNG parser exposes raw bytes
    const view = new DataView(buffer)
    const littleEndian = new Uint16Array(buffer.byteLength / 2)
    for (let i = 0; i < littleEndian.length; i++) {
      // getUint16 reads big-endian by default; writing through the typed
      // array stores the value in system (usually little) endianness
      littleEndian[i] = view.getUint16(i * 2)
    }
  3. Two 8-bit bands of an RGB PNG

    Have to combine bands on the GPU. StackOverflow.

Additionally, I couldn't get an array of Uint16s to work in WebGL. I think you need to switch some texture parameters. See 1 2. Some options might not work in WebGL 1.
