
Comments (7)

vincentsarago commented on June 28, 2024

thanks @Blockchainforcommons, I didn't realize the Sentinel-1 data needed more processing 🤦

Do you think the pre-processing should be done by default by rio-tiler-pds? I don't have time to go through the whole PDF, but if you can provide

  • the additional files (within the productinfo) we need
  • the equations we need to apply

I could add this in the sentinel-1 reader

Basically we need to

  1. fetch the files around here: https://github.com/cogeotiff/rio-tiler-pds/blob/master/rio_tiler_pds/sentinel/aws/sentinel1.py#L60
  2. if it's a simple calculation, we can add a post_process callback like https://github.com/cogeotiff/rio-tiler-pds/blob/master/rio_tiler_pds/landsat/aws/landsat8.py#L73

from rio-tiler-pds.

Blockchainforcommons commented on June 28, 2024

Hi Vincent, yes, a lot of pre-processing is required with SAR-like readers, but the specifics depend on the use case. A different calibration is wanted depending on whether I use the output for crop yield estimation, detecting forest fires, or finding out how ice caps are melting.

Adding all these options is possible, but that isn't really the goal of rio-tiler, I think? It is a great tool to extract data from part of a scene, but without the azimuth and pixel-count data, the amplitude array we are now getting is much less valuable.

For example, if I want to calculate the VV in decibels, which is the standard way of representing SAR data, I need to compute:

10 * log10((A ** 2) / (coefficient ** 2))

where A is the amplitude value we fetch now.
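As a sketch of that conversion (the helper name is mine; the formula is the one above, with the logarithm taken base 10 as is standard for decibels):

```python
import math

def amplitude_to_db(amplitude, coefficient):
    """Convert a raw SAR amplitude to decibel backscatter.

    `coefficient` is the calibration value (sigmaNought, betaNought,
    or gammaNought) looked up for this pixel from the annotation XML.
    """
    # linear backscatter = A^2 / coefficient^2, then 10 * log10 for dB
    return 10.0 * math.log10((amplitude ** 2) / (coefficient ** 2))
```

For example, an amplitude equal to the coefficient gives 0 dB, and an amplitude ten times larger gives +20 dB.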

  • the coefficient can be sigmaNought, betaNought, or gammaNought (depending on the use case) and can be found in the annotation/calibration XML file, e.g.:

aws s3 cp s3://sentinel-s1-l1c/GRD/2020/10/28/IW/DV/S1A_IW_GRDH_1SDV_20201028T003207_20201028T003232_034989_0414CB_EE56/annotation/calibration/calibration-iw-vh.xml . --request-payer requester

In there we find a list of calibrationVectors, and we look up our value based on the azimuth line and the pixel count of our location.
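For illustration, a minimal parser sketch. The element names (`calibrationVector`, `line`, `pixel`, `sigmaNought`) are my reading of the Sentinel-1 calibration annotation schema, and the inline XML is a tiny stand-in; real files contain many vectors with hundreds of pixel samples each:

```python
import xml.etree.ElementTree as ET

# Toy stand-in for annotation/calibration/calibration-iw-vh.xml
CALIBRATION_XML = """<calibration>
  <calibrationVectorList count="2">
    <calibrationVector>
      <line>0</line>
      <pixel count="3">0 5000 9999</pixel>
      <sigmaNought count="3">570.0 580.0 590.0</sigmaNought>
    </calibrationVector>
    <calibrationVector>
      <line>5000</line>
      <pixel count="3">0 5000 9999</pixel>
      <sigmaNought count="3">572.0 582.0 592.0</sigmaNought>
    </calibrationVector>
  </calibrationVectorList>
</calibration>"""

def parse_calibration_vectors(xml_text):
    """Return a list of (line, [pixel positions], [sigmaNought values])."""
    root = ET.fromstring(xml_text)
    vectors = []
    for vec in root.iter("calibrationVector"):
        line = int(vec.findtext("line"))
        pixels = [int(p) for p in vec.findtext("pixel").split()]
        sigma = [float(s) for s in vec.findtext("sigmaNought").split()]
        vectors.append((line, pixels, sigma))
    return vectors
```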

The azimuth line and pixel count are just the X, Y pixel values of the tile. So if the TIFF has 10,000 x 10,000 pixels and the extracted point is at pixel 3500 x 4000, then the azimuth line is 3500 and the pixel count is 4000. In the annotation file, you then look up the closest line/count to the extracted point.
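That closest line/count lookup could be sketched like this, assuming `vectors` is a list of `(line, pixels, values)` tuples parsed from the calibration XML; a production version would interpolate bilinearly rather than snap to the nearest grid point:

```python
def lookup_coefficient(vectors, line, pixel):
    """Pick the calibration value whose grid point is nearest (line, pixel)."""
    # nearest azimuth line among the calibration vectors
    nearest = min(vectors, key=lambda v: abs(v[0] - line))
    _, pixels, values = nearest
    # nearest pixel (range) sample on that line
    idx = min(range(len(pixels)), key=lambda i: abs(pixels[i] - pixel))
    return values[idx]
```

With the example above (azimuth line 3500, pixel count 4000), this snaps to the vector at line 5000 and its sample at pixel 5000.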

If you want rio-tiler to also return the data in decibels, you can fetch the annotation files, find the coefficient values per pixel, and perform the calculation, depending on which option (sigma/gamma/beta) the user requires.

Now this is just calibration; there is then also the speckle filter, terrain modifier, and more. But I think calibration is the most needed.
Also, you can see sentinel-hub delivering just this:

See under Units:
https://docs.sentinel-hub.com/api/latest/data/sentinel-1-grd/

You do have to make this available only for GRD scenes, as SLC scenes require other steps beforehand.

To summarize: getting the azimuth and pixel count is standard and very important for using Sentinel-1 data. If you want to give the user more features, the 'howevers and buts' expand fast.

Let me know what i can do,

Glad to help you out,

Carst


vincentsarago commented on June 28, 2024

@Blockchainforcommons thanks, that's really useful! Getting the info for a tile might be a bit complicated because we reproject the data, and we might not be able to know which pixel/line of the original file we are using 🤔

If we find a way to retrieve the correct values, we might be able to use the new ImageData class (introduced in `rio-tiler==2.0.0rc1`) to hold those values and then let the user do whatever is needed.

from rio_tiler_pds.sentinel.aws import S1L1CReader

with S1L1CReader(...) as scene:
    # fetch `productinfo`
    # fetch `*.xml`
    # maybe construct an `image like` array from the XML values

    # in addition to the `data`, the `S1L1CReader` will add (within the ImageData class) the values from the XML
    data = scene.tile(...)
    assert data.data
    assert data.gamma  # should be the same shape as data

    # we can use a new ImageData method to apply operations on the data array
    img = data.apply_calibration("sigmaNought")  # available: sigmaNought, betaNought, gammaNought

🤷‍♂️ @Blockchainforcommons, do you know any Python libs that work with Sentinel-1 GRD that we could 👀 (or import)?

Now this is just calibration, there is then also speckle filter, terrain modifier, and more. But i think calibration is the most needed.

Those might be a bit complex to implement; do you know any Python libs that are able to do this (specifically the terrain modifier)?


Blockchainforcommons commented on June 28, 2024

Getting the info for tile might be a bit complicated because we reproject the data and we might not be able to know which pixel/line of the original file we are using

Is that also the case for a point?

If so, we can get the bbox and the image size of the scene and estimate the pixel location.
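That estimate could be sketched as follows (the helper name is mine, and it assumes the scene is a north-up rectangle in lon/lat, which is only approximate for a SAR scene in its native range/azimuth geometry):

```python
def lonlat_to_pixel(lon, lat, bounds, width, height):
    """Estimate the (pixel, line) of a point from the scene bbox and size.

    `bounds` is (west, south, east, north) in degrees.
    """
    west, south, east, north = bounds
    # linear interpolation between the bbox edges
    pixel = round((lon - west) / (east - west) * (width - 1))
    line = round((north - lat) / (north - south) * (height - 1))
    return pixel, line
```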

Those might be a bit complex to implement; do you know any Python libs that are able to do this (specifically the terrain modifier)?

Yes, take a look at Snappy (make sure you have the right one) and pyroSAR, like this:

https://forum.step.esa.int/t/using-snappy-for-sentinel1-grd-preprocessing/19684

I'm not sure if the SNAP software has to be installed before using Snappy.


Blockchainforcommons commented on June 28, 2024

@vincentsarago any update, e.g. about the pixel/line retrieval from the original file?


vincentsarago commented on June 28, 2024

Well, right now I don't have time to work on this, I'm sorry! I may have time to review a PR if you want to work on it!


Blockchainforcommons commented on June 28, 2024

@vincentsarago no problem! It got me thinking. From the metadata we can get the image size, and there are also these values, which strike me as the corners in lat/lng:

first_far_lat 13.488847732543945 float64 deg
first_far_long 78.43860631113749 float64 deg
last_near_lat 11.533498764038086 float64 deg
last_near_long 80.4279174213898 float64 deg
last_far_lat 11.980189323425293 float64 deg
last_far_long 78.14514937886506 float64 deg

Since we know the lat/lng of the tile/point, we can get the azimuth/count from that. Am I right?
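As a sketch of that idea (the function name and corner ordering are mine; the metadata above suggests first/last × near/far corners are available, so three of them define a linear mapping from lat/lng to line/pixel, ignoring Earth curvature, which is fine as a first estimate):

```python
def latlng_to_azimuth_pixel(lon, lat, first_near, first_far, last_near,
                            width, height):
    """Estimate (azimuth line, pixel count) from three scene corners.

    Each corner is a (lon, lat) tuple. We solve the 2x2 linear system
        point = first_near + u * (first_far - first_near)
                           + v * (last_near - first_near)
    for the fractions u (range direction) and v (azimuth direction).
    """
    ax, ay = first_far[0] - first_near[0], first_far[1] - first_near[1]
    bx, by = last_near[0] - first_near[0], last_near[1] - first_near[1]
    px, py = lon - first_near[0], lat - first_near[1]
    det = ax * by - ay * bx  # nonzero when the corners are not collinear
    u = (px * by - py * bx) / det
    v = (ax * py - ay * px) / det
    return round(v * (height - 1)), round(u * (width - 1))
```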

