
opensartoolkit's People

Contributors

12rambau, buddyvolly, d-chambers, jamesemwheeler, kbodolai, mjavorka, pedep


opensartoolkit's Issues

scihub_catalogue() and read_inventory() use different column names

Hi BuddyVolly,
It's not a big deal, but when I pass the result of scihub_catalogue() to the search_refinement() function it does not work, because the column names do not match. If I go through the Scene1 class, read_inventory() renames the columns, but I don't take that path.

Anyway, I think the columns should have the same names, and I suggest the following:

--- a/ost/s1/search.py
+++ b/ost/s1/search.py
@@ -153,16 +153,16 @@ def _query_scihub(opener, query):
     # create empty GDF
     columns = [
         'identifier', 'polarisationmode', 'orbitdirection',
-        'acquisitiondate', 'relativeorbitnumber', 'orbitnumber',
-        'producttype', 'slicenumber', 'size', 'beginposition',
+        'acquisitiondate', 'relativeorbit', 'orbitnumber',
+        'product_type', 'slicenumber', 'size', 'beginposition',
         'endposition', 'lastrelativeorbitnumber', 'lastorbitnumber',
         'uuid', 'platformidentifier', 'missiondatatakeid',
         'swathidentifier', 'ingestiondate', 'sensoroperationalmode',
-        'footprint'
+        'geometry'
     ]
 
     crs = {'init': 'epsg:4326'}
-    geo_df = gpd.GeoDataFrame(columns=columns, crs=crs, geometry='footprint')
+    geo_df = gpd.GeoDataFrame(columns=columns, crs=crs)
 
     # we need this for the paging
     index, rows, next_page = 0, 99, 1
@@ -191,9 +191,7 @@ def _query_scihub(opener, query):
 
             acq_list = _read_xml(dom)
 
-            gdf = gpd.GeoDataFrame(
-                acq_list, columns=columns, crs=crs, geometry='footprint'
-            )
+            gdf = gpd.GeoDataFrame(acq_list, columns=columns, crs=crs)
 
             # append the gdf to the full gdf
             geo_df = geo_df.append(gdf)
@@ -335,9 +333,6 @@ def _to_postgis(gdf, db_connect, outtable):
 
     # construct the SQL INSERT line
     for _index, row in gdf.iterrows():
-
-        row['geometry'] = dumps(row['footprint'])
-        row.drop('footprint', inplace=True)
         identifier = row.identifier
         uuid = row.uuid
         line = tuple(row.tolist())
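In the meantime, a minimal workaround is to rename the catalogue columns before refinement. This is a sketch; the mapping is assumed from the diff above, not taken from the library itself:

def align_catalogue_columns(gdf):
    """Rename scihub_catalogue() columns to match read_inventory().

    Hypothetical helper; gdf is the GeoDataFrame returned by
    scihub_catalogue(), and the mapping is assumed from the diff above.
    """
    renamed = gdf.rename(columns={
        'relativeorbitnumber': 'relativeorbit',
        'producttype': 'product_type',
        'footprint': 'geometry',
    })
    # make the renamed column the active geometry again
    return renamed.set_geometry('geometry')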

reduce the number of dependencies

Looking at the dependency list, I realize there are many duplications. These duplications can lead to major breakage when the environment is built with pip (the main objective for now).

Remember that pip installs sequentially: it builds package 1, then package 2, and so on, without ever checking whether the requirements of 2 match those of 1.

Quick example:
we import

"fiona",
"GDAL>=2",
"godale",
"geopandas>=0.8",
"pyproj>=2.1",
"jupyterlab",
"matplotlib",
"numpy",
"pandas",
"psycopg2-binary",
"rasterio",

when

numpy
fiona
pyproj
pandas

are already dependencies of geopandas. I'll try to reduce this list to the minimum; see the sketch below.
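As a sketch, setup.py could keep only the packages OST imports directly and let geopandas pull in its own stack (the exact minimal set still needs to be verified against the code base):

# setup.py (sketch): direct imports only; geopandas already
# brings in fiona, pyproj, numpy and pandas.
install_requires = [
    "GDAL>=2",
    "godale",
    "geopandas>=0.8",
    "matplotlib",
    "psycopg2-binary",
    "rasterio",
]
# jupyterlab is only needed to run the notebooks and could move to
# an extras_require, e.g. extras_require={"notebooks": ["jupyterlab"]}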

It is not possible to create a time series of Sigma0

My proposal

/ost/s1/burst_batch.py

-    dict_of_product_types = {'bs': 'Gamma0', 'coh': 'coh', 'pol': 'pol'}
+    list_of_product_types = [('bs', 'Gamma0'), ('bs', 'Sigma0'), ('coh', 'coh'), ('pol', 'pol')]

-        for pr, pol in itertools.product(dict_of_product_types.items(), pols):
+        for pr, pol in itertools.product(list_of_product_types, pols):

docker build fails

Dear ost-team,

I tried to pull the Docker image, but docker pull buddyvolly/opensartoolkit still installs an old OST version. I then tried to build the new version from this repository, but docker build always gives the following error log at step 8/14:

docker build /home/ost/OpenSarToolkit_master_20201220/
Sending build context to Docker daemon 54.87MB
Step 1/14 : FROM ubuntu:18.04
---> 2c047404e52d
Step 2/14 : LABEL maintainer="Petr Sevcik, EOX"
---> Using cache
---> e76d4f005833
Step 3/14 : LABEL OpenSARToolkit='0.10.1'
---> Using cache
---> 3962f459ecd1
Step 4/14 : WORKDIR /home/ost
---> Using cache
---> 693506450e9e
Step 5/14 : COPY snap7.varfile $HOME
---> Using cache
---> a9f658bc3602
Step 6/14 : ENV OTB_VERSION="7.1.0" TBX_VERSION="7" TBX_SUBVERSION="0"
---> Using cache
---> 76a4ec29a32c
Step 7/14 : ENV TBX="esa-snap_sentinel_unix_${TBX_VERSION}_${TBX_SUBVERSION}.sh" SNAP_URL="http://step.esa.int/downloads/${TBX_VERSION}.${TBX_SUBVERSION}/installers" OTB=OTB-${OTB_VERSION}-Linux64.run HOME=/home/ost PATH=$PATH:/home/ost/programs/snap/bin:/home/ost/programs/OTB-${OTB_VERSION}-Linux64/bin
---> Using cache
---> c642dd2a2adc
Step 8/14 : RUN groupadd -r ost && useradd -r -g ost ost && apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -yq python3 python3-pip git libgdal-dev python3-gdal libspatialindex-dev libgfortran3 wget unzip imagemagick nodejs npm
---> Running in d4c18ae031c0
Get:1 http://archive.ubuntu.com/ubuntu bionic InRelease [242 kB]
Get:2 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Get:3 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Get:4 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
Get:5 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages [1344 kB]
Get:6 http://archive.ubuntu.com/ubuntu bionic/multiverse amd64 Packages [186 kB]
Get:7 http://archive.ubuntu.com/ubuntu bionic/universe amd64 Packages [11.3 MB]
Get:8 http://archive.ubuntu.com/ubuntu bionic/restricted amd64 Packages [13.5 kB]
Get:9 http://archive.ubuntu.com/ubuntu bionic-updates/multiverse amd64 Packages [53.8 kB]
Get:10 http://archive.ubuntu.com/ubuntu bionic-updates/restricted amd64 Packages [266 kB]
Get:11 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [2136 kB]
Get:12 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [2244 kB]
Get:13 http://archive.ubuntu.com/ubuntu bionic-backports/universe amd64 Packages [11.4 kB]
Get:14 http://archive.ubuntu.com/ubuntu bionic-backports/main amd64 Packages [11.3 kB]
Get:15 http://security.ubuntu.com/ubuntu bionic-security/restricted amd64 Packages [237 kB]
Get:16 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages [1816 kB]
Get:17 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [1372 kB]
Get:18 http://security.ubuntu.com/ubuntu bionic-security/multiverse amd64 Packages [15.3 kB]
Fetched 21.5 MB in 2s (9791 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
libgdal-dev : Depends: default-libmysqlclient-dev but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
The command '/bin/sh -c groupadd -r ost && useradd -r -g ost ost && apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -yq python3 python3-pip git libgdal-dev python3-gdal libspatialindex-dev libgfortran3 wget unzip imagemagick nodejs npm' returned a non-zero code: 100

Do you have a hint as to what I'm doing wrong?
Best, M
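For what it's worth, one untested guess: apt sometimes refuses to pull in libgdal-dev's MySQL dependency implicitly, and naming default-libmysqlclient-dev explicitly in the same RUN line can resolve the conflict:

--- a/Dockerfile
+++ b/Dockerfile
-RUN groupadd -r ost && useradd -r -g ost ost && apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -yq python3 python3-pip git libgdal-dev python3-gdal libspatialindex-dev libgfortran3 wget unzip imagemagick nodejs npm
+RUN groupadd -r ost && useradd -r -g ost ost && apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -yq default-libmysqlclient-dev python3 python3-pip git libgdal-dev python3-gdal libspatialindex-dev libgfortran3 wget unzip imagemagick nodejs npm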

apply linter on the lib

I didn't see any linter applied to the repo. The current consensus for Python is to use a combination of:

  • flake8
  • black

What do you think?
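A minimal way to try this locally (standard PyPI package names; flake8's max line length is relaxed to black's default of 88 so the two tools agree):

pip install flake8 black
black ost/
flake8 ost/ --max-line-length 88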

publish on Conda?

The lib is currently only available on PyPI; do you think it would be useful to have it on conda-forge as well?

burst identification not working without dias access

I got SLC data via s1_batch.download. Trying to:
s1_batch.burst_inventory_(key=key, refine=True)

but I get this error:
Getting burst info from S1A_IW_SLC__1SDV_20180601T165947_20180601T170014_022166_0265B3_4292.zip.
Getting burst info from S1B_IW_SLC__1SDV_20180607T165904_20180607T165931_011270_014AF7_EE78.zip.
Dummy function for mundi paths to be added

TypeError Traceback (most recent call last)
in
----> 1 s1_batch.burst_inventory_(key=key, refine=True)
2 #s1_batch.plot_inventory(s1_batch.burst_inventory, transperancy=0.1)

~/miniconda3/lib/python3.7/site-packages/ost/Project.py in burst_inventory_(self, key, refine)
343 self.refined_inventory_dict[key],
344 download_dir=self.download_dir,
--> 345 data_mount=self.data_mount)
346 else:
347 self.burst_inventory = burst.burst_inventory(

~/miniconda3/lib/python3.7/site-packages/ost/s1/burst.py in burst_inventory(inventory_df, download_dir, data_mount, uname, pword)
49 inventory_df.identifier == scene_id].orbitdirection.values[0]
50
---> 51 filepath = scene.get_path(download_dir, data_mount)
52 # print(filepath)
53 if filepath[-4:] == '.zip':

~/miniconda3/lib/python3.7/site-packages/ost/s1/s1scene.py in get_path(self, download_dir, data_mount)
169 elif os.path.isdir(self._onda_path(data_mount)):
170 path = self._onda_path(data_mount)
--> 171 elif os.path.isfile(self._mundi_path(data_mount)):
172 path = self._mundi_path(data_mount)
173 elif os.path.isfile(self._aws_path(data_mount)):

~/miniconda3/lib/python3.7/genericpath.py in isfile(path)
28 """Test whether a path is a regular file"""
29 try:
---> 30 st = os.stat(path)
31 except OSError:
32 return False

TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType
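The crash comes from passing None to os.path.isfile(): the mundi path is still a dummy and returns None. A defensive sketch, with names taken from the traceback above:

import os

def _isfile(path):
    """Like os.path.isfile(), but treat None as 'not found'
    instead of raising TypeError."""
    return path is not None and os.path.isfile(path)

# get_path() could then use, for the still-dummy mundi branch:
# elif _isfile(self._mundi_path(data_mount)):
#     path = self._mundi_path(data_mount)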

complete the community checklist?

If you go to the "Insights" tab you'll see that GitHub provides some guidelines about community management. As the project is starred by 137 people, it would make sense to comply with them.

  • Code of conduct
  • Description
  • README
  • Contributing
  • License
  • Issue templates
  • Pull request template
  • Repository admins accept content reports

problems in grd_to_ard

hi,
I report two issues here:
1)
OurProject.grd_to_ard(subset=None, timeseries=True, timescan=True, mosaic=True)
gives error:
in grd_to_ard(self, inventory_df, subset, timeseries, timescan, mosaic, overwrite)
682 self.ard_parameters['dem'] = 'ASTER 1sec GDEM'
683
--> 684 if subset.split('.')[-1] == '.shp':
685 subset = str(vec.shp_to_wkt(subset, buffer=0.1, envelope=True))
686 elif subset.startswith('POLYGON (('):

AttributeError: 'NoneType' object has no attribute 'split'

  2. So when I put the "aoi" variable into "subset", where aoi is WKT of the form "POLYGON ((....", then another error appears:
    in grd_to_ard(self, inventory_df, subset, timeseries, timescan, mosaic, overwrite)
    708 self.ard_parameters,
    709 subset,
    --> 710 self.data_mount
    711 )
    712

/opt/miniconda/envs/shared/lib/python3.7/site-packages/ost/s1/grd_batch.py in grd_to_ard_batch(inventory_df, download_dir, processing_dir, temp_dir, ard_parameters, subset, data_mount)
153 border_noise=border_noise,
154 subset=subset,
--> 155 polarisation=polarisation)
156
157

/opt/miniconda/envs/shared/lib/python3.7/site-packages/ost/s1/grd_to_ard.py in grd_to_ard(filelist, output_dir, file_id, temp_dir, resolution, product_type, ls_mask_create, speckle_filter, dem, to_db, border_noise, subset, polarisation)
783
784 grd_import = opj(temp_dir, '{}_imported'.format(
--> 785 os.path.basename(file)[:-5]))
786 logfile = opj(output_dir, '{}.Import.errLog'.format(
787 os.path.basename(file)[:-5]))

/opt/miniconda/envs/shared/lib/python3.7/posixpath.py in basename(p)
144 def basename(p):
145 """Returns the final component of a pathname"""
--> 146 p = os.fspath(p)
147 sep = _get_sep(p)
148 i = p.rfind(sep) + 1

TypeError: expected str, bytes or os.PathLike object, not NoneType

Note that I use the parameter
OurProject.ard_parameters['polarisation'] = 'VV,VH'
but I also tried
OurProject.ard_parameters['polarisation'] = ['VV','VH']

with no difference.
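A hedged sketch of a fix for the first error, assuming grd_to_ard() is meant to accept subset=None and skip subsetting (the code is paraphrased from the traceback, not the actual source; vec is the helper module already used there):

# sketch: only inspect subset when one was actually given
if subset is not None:
    # note: the original compares against '.shp', but str.split('.')
    # strips the dot, so that branch can never match
    if subset.split('.')[-1] == 'shp':
        subset = str(vec.shp_to_wkt(subset, buffer=0.1, envelope=True))
    elif subset.startswith('POLYGON (('):
        pass  # already a WKT string, use as-is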

create_stack uses a wrong pattern

/ost/generic/common_wrappers.py
@@ -347,7 +347,7 @@ def create_stack(
         command = (
             f'{GPT_FILE} {graph} -x -q {2*cpus} '
             f'-Pfilelist={file_list} '
-            f'-PbandPattern=\'{pattern}.*\' '
+            f'-PbandPattern=\'.*{pattern}.*\' '
             f'-Poutput={out_stack}'
         )
 

javaImport Error with custom GPT, only when copied to .ost/gpt

It seems that executing gpt in some environments (here Ubuntu 18.04 and a Python 3.6 venv) from the .ost/gpt folder won't recognize the Java environment and fails when importing some core Java libraries.

Basically commenting/deleting these lines solves the issue:

os.makedirs(opj(homedir, '.ost'), exist_ok=True)
shutil.copy(gptfile, opj(homedir, '.ost', 'gpt'))

I recommend using just the GPT installed by the user, for instance getting the GPT path from a dedicated environment variable like os.getenv('gpt_path'); see the sketch below.

Petr
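A minimal sketch of that suggestion (the variable name GPT_PATH and the PATH fallback are illustrative, not OST's actual behaviour):

import os
import shutil

def find_gpt():
    """Prefer a user-configured gpt binary over a copied one."""
    gpt_path = os.getenv('GPT_PATH')           # user-provided location
    if gpt_path and os.access(gpt_path, os.X_OK):
        return gpt_path
    return shutil.which('gpt')                 # fall back to $PATH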

slc2Ard__.py: slcImp2Coh broken

  • line 424: coregistration needs a logfile
  • missing reference to getGPT (line 164)
  • if this is to be replaced by burst processing, remove the whole file

WIP: (discussion/recommendation) Import ARD settings as .json

Preparation:

  • let's polish the current code first
  • collect ideas at the workshop at the end of February 2020

When to implement:

  • implementation is quite easy (a sketch follows at the end of this issue)
  • after the workshop

Some more polishing maybe?

  • No need to separate GRD and SLC classes, when product type is given, etc.
  • Some additional functions to be introduced, like move Coherence out of ARD processing etc.
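A minimal sketch of what the .json import could look like once the schema is settled (file name and usage are placeholders for discussion):

import json

def load_ard_parameters(json_file):
    """Read ARD settings from a .json file (hypothetical schema)."""
    with open(json_file) as f:
        return json.load(f)

# usage sketch:
# s1_grd.ard_parameters = load_ard_parameters('ard_gtc.json')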

UnboundLocalError: local variable 'calibrate_to' referenced before assignment

Hi!

When I try to run the batch routine, I face this error:

Script:
s1_grd.grds_to_ard(
inventory_df=s1_grd.refined_inventory_dict[key],
timeseries=True,
timescan=True,
mosaic=True,
overwrite=False
)

Error:

UnboundLocalError Traceback (most recent call last)
in
4 timescan=True,
5 mosaic=True,
----> 6 overwrite=False
7 )

/usr/local/lib/python3.6/dist-packages/ost/Project.py in grds_to_ard(self, inventory_df, subset, timeseries, timescan, mosaic, overwrite, exec_file, cut_to_aoi)
1121 subset,
1122 self.data_mount,
-> 1123 exec_file)
1124
1125 # reset number of already processed acquisitions

/usr/local/lib/python3.6/dist-packages/ost/s1/grd_batch.py in grd_to_ard_batch(inventory_df, download_dir, processing_dir, temp_dir, proc_file, subset, data_mount, exec_file)
139 temp_dir,
140 proc_file,
--> 141 subset=subset)
142
143

/usr/local/lib/python3.6/dist-packages/ost/s1/grd_to_ard.py in grd_to_ard(filelist, output_dir, file_id, temp_dir, proc_file, subset)
909 calibrated = opj(temp_dir, '{}_cal'.format(file_id))
910 logfile = opj(output_dir, '{}.Calibration.errLog'.format(file_id))
--> 911 return_code = common._calibration(infile, calibrated, logfile, calibrate_to)
912
913 # delete input

UnboundLocalError: local variable 'calibrate_to' referenced before assignment
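calibrate_to is presumably only assigned inside a product-type branch that is never taken for this configuration. A hedged sketch of a defensive fix (the branch values are illustrative, not the actual OST mapping):

# sketch: bind calibrate_to up front and fail with a clear message
calibrate_to = None
if product_type == 'GTC-gamma0':
    calibrate_to = 'gamma0'
elif product_type == 'GTC-sigma0':
    calibrate_to = 'sigma0'
if calibrate_to is None:
    raise ValueError(f'no calibration defined for product type {product_type!r}')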

Download from ONDA Dias

check_connection using https://catalogue.onda-dias.eu/dias-catalogue/ee89ad93-c6ab-4eb3-b698-4585b4af3ca0)/$value returns a 403 status (I assume the product is too old and is not reachable anymore).

A workaround:

    if response.status_code == 403:
        return 200

add github actions

If the lib is released on PyPI, some GitHub Actions tests should automatically run for PRs and pushes; I can help with these.

stop importing gdal

Based on the GDAL documentation and the fact that we require gdal>=2 in setup.py, we can safely drop the usage of:

import gdal
import ogr

The bindings under the osgeo module have been available since GDAL 1.7:

from osgeo import gdal
from osgeo import ogr

Additionally, there are five compatibility modules that are included but provide notices to state that they are deprecated and will be going away. If you are using GDAL 1.7 bindings, you should update your imports to utilize the usage above, but the following will work until GDAL 3.1.
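While the code base migrates, a small compatibility shim keeps imports working across GDAL versions (a sketch; once gdal>=2 is enforced, the plain osgeo import alone is enough):

try:
    from osgeo import gdal, ogr   # the correct import since GDAL 1.7
except ImportError:
    import gdal                   # deprecated shim, removed after GDAL 3.1
    import ogr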

missing comma in polarisation choice

I had issues using "VV,VH" polarisation.
These two modifications solve the problem:

ost/Project.py
possible_pols = ['*', 'VV', 'VH', 'HV', 'HH', 'VV, VH', 'HH, HV']

ost/helpers/settings.py
['VV, VH, HH, HV', 'VV', 'VH', 'VV, VH', 'HH, HV', 'VV, HH']

how to get AOI ard result from overlapping scenes

Hi all,

I would like to create a Sigma0 product with OST. My AOI contains two different scenes which overlap. How can I get a Sigma0 product covering the AOI?

[figure: s1_grd.plot_inventory output showing the two overlapping scene footprints and the AOI rectangle]

I just want the mosaic of the two overlapping Sigma0 products, cropped to the black rectangle (the AOI).

Which function should I use and how can I find the produced result?

Thank you!
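Based only on the method and parameter names that appear in the neighbouring reports here, a hedged sketch of the intended call (not verified against the current API): enable the mosaic step and cut it to the AOI, then look for the result under the project's processing directory.

# sketch: mosaic overlapping ARDs and clip to the AOI
s1_grd.ard_parameters['mosaic']['cut_to_aoi'] = True
s1_grd.grds_to_ards(
    inventory_df=s1_grd.refined_inventory_dict[key],
    timeseries=True,
    timescan=False,
    mosaic=True,
    overwrite=False,
)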

mosaic_timeseries with more than 9 time-steps fails

ost/s1/burst_batch.py

@@ -556,7 +556,7 @@ def mosaic_timeseries(burst_inventory, config_file):
-            list_of_files = list(temp_mosaic.glob(f'{i}*{product}.tif'))
+            list_of_files = list(temp_mosaic.glob(f'{i}.*{product}.tif'))

remove wiki from GitHub

The wiki page is empty; if we consider creating documentation on RTD (#54), the wiki tab could be removed. What do you think?

GRD frame import exited with error -9

Hello,

I am trying to preprocess Sentinel-1 data with this toolkit. At first, the Docker build process did not work with the provided Dockerfile. I fixed it by doing:

  1. libgfortran3 seems to be no longer available for Ubuntu 20, so I used libgfortran5 instead.
  2. I had to change the Orfeo Toolbox link to https://www.orfeo-toolbox.org/packages/archives/OTB/${OTB}
  3. I deleted the installation of jupyterlab because this step failed due to the nodejs version.

Is this fine so far, or did I change anything that could break something? With these changes I could build and use the container. So now I am trying to preprocess the data as shown in the OST tutorial notebooks.

I set the parameters as:
s1_grd.ard_parameters['single_ARD']['resolution'] = 50
s1_grd.ard_parameters['single_ARD']['remove_speckle'] = True
s1_grd.ard_parameters['single_ARD']['remove_mt_speckle'] = True
s1_grd.ard_parameters['mosaic']['cut_to_aoi'] = True
s1_grd.ard_parameters['single_ARD']['dem']['dem_name'] = 'SRTM 1Sec HGT'

where s1_grd is an instance of Sentinel1Batch class. Then I call s1_grd.grds_to_ards with parameters: timeseries=True, timescan=False, mosaic=False, overwrite=False.

This works fine for some files, but for others I always get the following error: GRD frame import exited with error -9.
It also says that there is a SNAP error log file available, called 20180130_15.Import.errLog, but the only thing logged in the file is:
INFO: org.esa.snap.core.gpf.operators.tooladapter.ToolAdapterIO: Initializing external tool adapters
INFO: org.esa.snap.core.util.EngineVersionCheckActivator: Please check regularly for new updates for the best SNAP experience.
INFO: org.hsqldb.persist.Logger: dataFileCache open start

Now I have no idea how to proceed or where to look further. Do you have any idea what this error could mean or maybe how to solve it? Or do you have any pointers for where I could potentially find out more about this? I have tried to solve this issue for a while now and I am running out of ideas. Any help would be highly appreciated. If you need me to provide any more information please let me know.

Best wishes,
Nico

What are the differences between different ARD types: CEOS, Earth-Engine, OST-GTC and OST-RTC?

Hi, I'm a little confused about these ARD types. By tracing the function Sentinel1Scene.get_ard_parameters(), I know they have different default parameters generated by the corresponding JSON templates. For example, the resolution is 10 m for Earth-Engine and CEOS, while it is 20 m for OST-GTC and OST-RTC.

I wonder what standards define these ARD products. For example, if I set the ARD type to 'Earth-Engine', will the result of Sentinel1Scene.create_ard() be identical to the data downloaded from Google Earth Engine?

Another question: what are the best parameter settings to create ARD for flood mapping? I read the flood mapping tutorial provided by ESA. In that tutorial, 5 preprocessing steps are recommended: (1) orbit file application, (2) image subset, (3) radiometric calibration, (4) despeckling, (5) terrain correction, which differs from the default OST steps. According to the default OST-GTC parameters, multi-looking is executed but speckle filtering and terrain flattening are not. So is there a fixed preprocessing workflow which is always best for all GRDH data?

Basic usage documentation

It would be great to have some basic usage documentation. I have gone through the installation but not sure what commands are available.

thank contributors using an AUTHORS file

For a multi-person project involving multiple organizations, a good practice is to follow the all-contributors guidelines (https://allcontributors.org). I'm working on their bot, and it will be compatible with the .rst format in no time (currently it only supports .md).

The table can be started manually anyway.

scihub zip check

hi,
I got an error not allowing me to download data from scihub.
I used the following command:
OurProject.download(OurProject.refined_inventory_dict['ASCENDING_VVVH'], mirror=1, concurrent=1)
(where OurProject is an instance of ost.Sentinel1_GRDBatch)

The data are downloaded normally, but some zip check considers them inconsistent. I think this zip check must be too strict.
(I just tried to download the file manually and it was downloaded correctly, so the server should be OK.)

INFO: Checking the zip archive of /.../SAR/GRD/2019/07/24/S1B_IW_GRDH_1SDV_20190724T180118_20190724T180143_017279_0207FA_B270.zip for inconsistency
INFO: /.../download/SAR/GRD/2019/07/24/S1B_IW_GRDH_1SDV_20190724T180118_20190724T180143_017279_0207FA_B270.zip did not pass the zip test. Re-downloading the full scene.
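For context, such a consistency check typically relies on Python's zipfile module; a minimal illustration of what the test looks like (illustrative, not OST's actual implementation):

import zipfile

def zip_is_consistent(path):
    """True if the archive opens and every member passes its CRC check."""
    try:
        with zipfile.ZipFile(path) as archive:
            return archive.testzip() is None   # None = no corrupt member
    except zipfile.BadZipFile:
        return False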

are there any comparisons of the outputs of these processing steps with SNAP?

I'm curious if the outputs are comparable to processing Sentinel-1 scenes with gpt. I'm also curious as to the status of the repo: is it ready to be used as a replacement for SNAP for processing GRD scenes?

I'll be going through the notebooks; from this example it looks like all the processing steps I need are supported, which is exciting! https://github.com/ESA-PhiLab/OST_Notebooks/blob/master/1%20-%20The%20first%20S1%20scene.ipynb

We are interested in a processing solution that can be easily dockerized and applied to process terabytes of GRD imagery time series.

refine_search crashes if mosaic_refine=False

In refine_inventory.py line 570, datelist is not defined.

I suggest the following:

+            datelist, inventory_refined_tmp = _forward_search(
+                aoi_gdf, inventory_refined, area_reduce
+            )
             if mosaic_refine is True:
-                datelist, inventory_refined = _forward_search(
-                    aoi_gdf, inventory_refined, area_reduce
-                )
                 inventory_refined = _backward_search(
-                    aoi_gdf, inventory_refined, datelist, area_reduce
+                    aoi_gdf, inventory_refined_tmp, datelist, area_reduce
                 )

ARD Types, Final Names (SLC extensions)

  • Discuss "final" ARD types at the coming workshop
  • OST currently 3 versions (make it 2 or change name for others)
  • SLC and GRD can have the same nomenclature for BS/TC products
  • Maybe an extra ard_params for Coherence/Polarimetric(Interferometric) Products
  • Similar goes for the corresponding 'product_type' names, define 'final' names if possible

create documentation

Would it be useful to have documentation on RTD (Read the Docs)?
It could easily integrate (see the conf.py sketch after this list):

  • API content (using sphinx autodoc)
  • Notebook examples
  • Usage and installation instructions
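A minimal docs/conf.py covering those three points could start like this (theme and extension choices are suggestions; nbsphinx is one way to render the notebooks):

# docs/conf.py (sketch)
project = 'OpenSARToolkit'
extensions = [
    'sphinx.ext.autodoc',    # pull API content from docstrings
    'sphinx.ext.napoleon',   # Google/NumPy docstring support
    'nbsphinx',              # render the example notebooks
]
html_theme = 'sphinx_rtd_theme'
exclude_patterns = ['_build']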
