
force's Introduction

FORCE

Framework for Operational Radiometric Correction for Environmental monitoring

Version 3.7.12

FORCE Logo

About

FORCE is an all-in-one processing engine for medium-resolution Earth Observation image archives. FORCE uses the data cube concept to mass-generate Analysis Ready Data, and enables large area + time series applications. With FORCE, you can perform all essential tasks in a typical Earth Observation Analysis workflow, i.e. going from data to information.

FORCE natively supports the integrated processing and analysis of

  • Landsat 4/5 TM,
  • Landsat 7 ETM+,
  • Landsat 8 OLI,
  • Landsat 9 OLI, and
  • Sentinel-2 A/B MSI.

Non-native data sources can also be processed, e.g. Sentinel-1 SAR data or environmental variables.

Related Links

The documentation is available on ReadtheDocs.

Learn how to use FORCE. Have a look at the Tutorials. Check regularly for new content.

Get help, and help others in the FORCE discussion section

Follow the FORCE project at ResearchGate

Stay updated, and follow me on Twitter

You are using FORCE? Spread the word, and use the #FORCE_EO hashtag in your tweets!

force's People

Contributors

antguz, danschef, davidfrantz, ernstste, florian-katerndahl, franzschug, geo-masc, jakimowb, janzandr, lehmann-fabian, pabrod, thielfab, vincentschut


force's Issues

SSL Error when trying to download data with force-level1-csd

Hello,

the force-level1-csd tool fails to download some requested data from a server because the server does not have valid SSL certificates.

[2021-10-08 12:13:59,233] {pod_launcher.py:136} INFO - ERROR 1: server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
[2021-10-08 12:13:59,234] {pod_launcher.py:136} INFO - ERROR 1: Error returned by server : server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none (60)
[2021-10-08 12:13:59,235] {pod_launcher.py:136} INFO - FAILURE:
[2021-10-08 12:13:59,236] {pod_launcher.py:136} INFO - Unable to open datasource `WFS:http://ows.geo.hu-berlin.de/cgi-bin/qgis_mapserv.fcgi?MAP=/owsprojects/grids.qgs&SERVICE=WFS&REQUEST=GetCapabilities&typename=landsat&bbox=' with the following drivers.
[2021-10-08 12:13:59,236] {pod_launcher.py:136} INFO -   -> `PCIDSK'
.
.
.

Command that causes it (omitting daterange and path arguments as they seem to be irrelevant):

force-level1-csd -s LT04,LT05,LE07,S2A -d {daterange} -c 0,70 {image_metadata_folderpath} {image_folderpath} {queue_filepath} {aoi_filepath}

FORCE ver: 3.6.5

The server seems to be owned by Humboldt University of Berlin. Who would be the right person to contact for SSL certificate renewal here?

Thanks!

force-higher-level OUTPUT_EXPLODE

Problem

Running force-higher-level on level2 data with

INDEX = EVI
OUTPUT_TSS = TRUE 1
OUTPUT_EXPLODE = TRUE

creates an EVI image for each observation date, like 2010-2020_001-365_HL_TSA_LNDLG_EVI_TSS_20190825.tif.

A problem occurs if multiple level2 observations exist for the same date, e.g.
20190825_LEVEL2_SEN2A_BOA and 20190825_LEVEL2_LND08_BOA.
In this case force-higher-level writes only one output image for 20190825.
Instead, using the parameterization with OUTPUT_EXPLODE = FALSE will generate an EVI stack with the same number of observations as expected (but with wrong Sensor metadata, see #25 )

I assume that images for the same date are simply overwritten, i.e.
2010-2020_001-365_HL_TSA_LNDLG_EVI_TSS_20190825.tif is written twice.

Expected behavior / solution
OUTPUT_EXPLODE = TRUE should create the same number of files as bands would be created with OUTPUT_EXPLODE = FALSE.

Probably it is just required to add the sensor names (SEN2A, LND08) to the file name, as known from the level 2 outputs:

2010-2020_001-365_HL_TSA_LNDLG_EVI_TSS_20190825_SEN2A.tif
2010-2020_001-365_HL_TSA_LNDLG_EVI_TSS_20190825_LND08.tif

Parameterization

  1. download and extract example data from #25
  2. edit bugstamp/level3_tsa.prm and set OUTPUT_EXPLODE = TRUE
  3. run force-higher-level level3_tsa.prm

Setup
Tested with FORCE v. 3.2.1 on BB8 (/develop/force-higher-level)

Error on WFS while selecting/downloading

I experienced difficulties while selecting and downloading Sentinel-2 A/B satellite images. Description: no response from the WFS. You can find the details in the error message below.

Searching for footprints / tiles intersecting with geometries of AOI shapefile...
ERROR 1: server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
ERROR 1: Error returned by server : server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none (60)
FAILURE:
Unable to open datasource `WFS:http://ows.geo.hu-berlin.de/cgi-bin/qgis_mapserv.fcgi?MAP=/owsprojects/grids.qgs&SERVICE=WFS&REQUEST=GetCapabilities&typename=sentinel2&bbox=' with the following drivers.
-> `PCIDSK'
-> `netCDF' -> `JP2OpenJPEG'
-> `PDF' -> `ESRI Shapefile'
-> `MapInfo File' -> `UK .NTF'
-> `OGR_SDTS' -> `S57'
-> `DGN' -> `OGR_VRT'
-> `REC' -> `Memory'
-> `BNA' -> `CSV'
-> `NAS' -> `GML'
-> `GPX' -> `LIBKML'
-> `KML' -> `GeoJSON'
-> `Interlis 1' -> `Interlis 2'
-> `OGR_GMT' -> `GPKG'
-> `SQLite' -> `OGR_DODS'
-> `ODBC' -> `WAsP'
-> `PGeo' -> `MSSQLSpatial'
-> `OGR_OGDI' -> `PostgreSQL'
-> `MySQL' -> `OpenFileGDB'
-> `XPlane' -> `DXF'
-> `CAD' -> `Geoconcept'
-> `GeoRSS' -> `GPSTrackMaker'
-> `VFK' -> `PGDUMP'
-> `OSM' -> `GPSBabel'
-> `SUA' -> `OpenAir'
-> `OGR_PDS' -> `WFS'
-> `SOSI' -> `HTF'
-> `AeronavFAA' -> `Geomedia'
-> `EDIGEO' -> `GFT'
-> `SVG' -> `CouchDB'
-> `Cloudant' -> `Idrisi'
-> `ARCGEN' -> `SEGUKOOA'
-> `SEGY' -> `XLS'
-> `ODS' -> `XLSX'
-> `ElasticSearch' -> `Walk'
-> `Carto' -> `AmigoCloud'
-> `SXF' -> `Selafin'
-> `JML' -> `PLSCENES'
-> `CSW' -> `VDV'
-> `GMLAS' -> `TIGER'
-> `AVCBin' -> `AVCE00'
-> `HTTP'

error in higher level TSA folding statistic

Hi David,

I am deriving higher level, quarterly folded TSA results for different statistics using Landsat data. The region of interest is small and intersects with just four Landsat tiles.

For AVG, KRT, MAX, MIN, RNG, SKW and STD it is working. However, for IQR, Q25, Q50, Q75 it appears that I get a memory related error.

force-higher-level LEVEL3_TSA... .prm

results in the following error message:

*** Error in `force-higher-level': free(): invalid next size (fast): 0x00007fc670184e30 ***
Abgebrochen (Speicherabzug geschrieben)

I am using FORCE 3.4.0 compiled on Ubuntu 16.04 (virtual machine) with six cores and 16 GB RAM.

Christoph

[FR] force-train parameter file: FILE_FEATURES and FILE_RESPONSE CSV file handling

Problem

The force-train parameter file specifies the CSV files paths to FILE_FEATURES and FILE_RESPONSE that contain the features and labels/responses required to train a machine learning model.

As described in the parameter file, FILE_FEATURES has to be a:

# File that is holding the features for training (and probably validation).
# The file needs to be a table with features in columns, and samples in rows.
# Column delimiter is whitespace. The same number of features must be given
# for each sample. Do not include a header. The samples need to match the
# response file.
# Type: full file path

(FILE_RESPONSE accordingly)

Overall, these are quite strict restrictions on the CSV. CSV files without a header line are hard to understand, in particular with an increasing number of features/columns. It requires saving the "header" information somewhere else to document which inputs have been used for the model training.

I think it is also uncommon that features and responses need to be separated into two CSV files. This complicates maintaining the set of reference data, i.e. features and corresponding labels, e.g. in spreadsheet software like Excel / LibreOffice.

Recommendations
I recommend the following improvements:

  1. Allow for header lines in FILE_FEATURES and FILE_RESPONSE (see the sketch below)
  • both files still need to have the same number of lines, i.e. if present, a header line needs to exist in both files
  • a header line exists if the first line in FILE_FEATURES starts with an alphabetic character, e.g. matches the regular expression ^\s*[A-Za-z]
  2. Allow features and responses to be defined in the same CSV
  • this could be the case if FILE_FEATURES and FILE_RESPONSE are the same
  • in this case, the 1st column should always contain the response
  3. Allow to specify the CSV column delimiter
  • introduce a parameter CSV_DELIMITER that can be used to specify a CSV delimiter different from whitespace
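
A minimal sketch of how the proposed header detection could look (hedged; this is not FORCE code, and the function name has_header_line is made up for illustration):

#include <stdio.h>
#include <ctype.h>

/* Check whether the first line of a feature/response file is a header,
 * i.e. whether its first non-whitespace character is alphabetic
 * (the ^\s*[A-Za-z] rule proposed above). */
int has_header_line(const char *path){
  FILE *f = fopen(path, "r");
  char line[4096];
  int header = 0;

  if (f == NULL) return -1;                                // file not readable

  if (fgets(line, sizeof(line), f) != NULL){
    char *p = line;
    while (*p != '\0' && isspace((unsigned char)*p)) p++;  // skip leading whitespace
    header = isalpha((unsigned char)*p) ? 1 : 0;
  }

  fclose(f);
  return header;
}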

[develop branch] CSO error "double free or corruption (out)"

Bug Description

I would like to calculate the annual number of clear observations using /develop/force-higher-level level2_cso.prm (prm file attached).
Everything is fine when using a MONTH_STEP < 12. In case of MONTH_STEP = 12 the CSO calculation is aborted and it looks like only some of the image lines are written:

(This affects the current develop version, not the last master)

/develop/force-higher-level /data/Jakku/diss_bj/level2_cso.prm
number of processing units: 50
 (active tiles: 1, chunks per tile: 50)
________________________________________
Progress:                          0.00%
Time for I/C/O:           100%/000%/000%
ETA:                   not available yet
________________________________________
                   input compute  output
Processing unit:       1       0      -1
Tile X-ID:            49      49      -1
Tile Y-ID:            27      27      -1
Chunk ID:              1       0      -1
Threads:               8      22       4
Time (sec):            2       0       0
double free or corruption (out)
[1]    33302 abort (core dumped)  /develop/force-higher-level /data/Jakku/diss_bj/level2_cso.prm

Parameterization

DATE_RANGE = 2015-01-01 2020-12-31
DOY_RANGE = 1 365
CSO = NUM

details see
level2_cso.zip

Setup

"Ubuntu 18.04.4 LTS
BB8, 120 cores, RAM 754 GB

/develop/force-higher-level data/Jakku/diss_bj/level2_cso.prm

force-level2: snow cover exceeding the MAX_CLOUD_COVER_FRAME threshold leads to 0 product(s) written

Describe the bug
In short:
If the snow cover of a scene exceeds the threshold given via MAX_CLOUD_COVER_FRAME, no products will be written even though processing still happens.

More information/background:
I stumbled upon this bug purely by chance, because I happened to check the level-2 output of a very snowy scene and noticed that it was not what I expected. In the following you can see the level-1 scenes that cover my AOI (the state of Thuringia). You can see some wispy clouds in the lower left, but other than that it's snow.

Screenshot from 2021-04-01 11-09-18

Here is the level-2 output. The blue grid is the GLANCE7 grid that I used for processing. I'm also using a DEM in the shape of my AOI, which afaik shouldn't cause any (unrelated) problems... right?

Screenshot from 2021-04-01 11-09-35

In my parameter file I used 75 as a threshold for both MAX_CLOUD_COVER_FRAME and MAX_CLOUD_COVER_TILE.
Here is the log output where you can see that cloud cover is always well below the threshold, but curiously the only scenes that produced an output are the ones with a snow cover below the given threshold! 🕵🏻 🔎

/level1/sentinel2/T32UNA/S2A_MSIL1C_20170127T102301_N0204_R065_T32UNA_20170127T102258.SAFE: dc:   2.54%. wc:   0.11%. sc:  85.01%. cc:  21.30%. AOD: 0.2516. # of targets: 110/187.  0 product(s) written. Success! Processing time: 13 mins 31 secs
/level1/sentinel2/T32UNB/S2A_MSIL1C_20170127T102301_N0204_R065_T32UNB_20170127T102258.SAFE: dc:  32.64%. wc:   0.21%. sc:  77.08%. cc:  14.03%. AOD: 0.2225. # of targets: 654/2880.  0 product(s) written. Success! Processing time: 20 mins 02 secs
/level1/sentinel2/T32UNC/S2A_MSIL1C_20170127T102301_N0204_R065_T32UNC_20170127T102258.SAFE: dc:   8.05%. wc:   0.15%. sc:  85.21%. cc:  12.53%. AOD: 0.2424. # of targets: 93/303.  0 product(s) written. Success! Processing time: 10 mins 14 secs
/level1/sentinel2/T32UPA/S2A_MSIL1C_20170127T102301_N0204_R065_T32UPA_20170127T102258.SAFE: dc:  16.51%. wc:   0.61%. sc:  80.58%. cc:  11.57%. AOD: 0.1977. # of targets: 1478/2600.  0 product(s) written. Success! Processing time: 18 mins 38 secs
/level1/sentinel2/T32UPB/S2A_MSIL1C_20170127T102301_N0204_R065_T32UPB_20170127T102258.SAFE: dc:  85.96%. wc:   0.47%. sc:  66.74%. cc:  12.40%. AOD: 0.2098. # of targets: 3571/7550.  2 product(s) written. Success! Processing time: 34 mins 17 secs
/level1/sentinel2/T32UPC/S2A_MSIL1C_20170127T102301_N0204_R065_T32UPC_20170127T102258.SAFE: dc:  10.40%. wc:   0.43%. sc:  63.78%. cc:  16.04%. AOD: 0.2358. # of targets: 208/492.  1 product(s) written. Success! Processing time: 15 mins 35 secs
/level1/sentinel2/T32UQA/S2A_MSIL1C_20170127T102301_N0204_R065_T32UQA_20170127T102258.SAFE: dc:   0.88%. wc:   0.68%. sc:  91.33%. cc:   2.01%. AOD: 0.1588. # of targets: 168/196.  0 product(s) written. Success! Processing time: 12 mins 38 secs
/level1/sentinel2/T32UQB/S2A_MSIL1C_20170127T102301_N0204_R065_T32UQB_20170127T102258.SAFE: dc:  17.07%. wc:   0.39%. sc:  91.47%. cc:  15.61%. AOD: 0.2171. # of targets: 1082/1377.  0 product(s) written. Success! Processing time: 16 mins 25 secs
/level1/sentinel2/T33UUS/S2A_MSIL1C_20170127T102301_N0204_R065_T33UUS_20170127T102258.SAFE: dc:   7.09%. wc:   0.33%. sc:  94.56%. cc:  27.64%. AOD: 0.2657. # of targets: 118/189.  0 product(s) written. Success! Processing time: 13 mins 28 secs

I processed the same scenes with the same setup and parameter settings, except changing MAX_CLOUD_COVER_FRAME and MAX_CLOUD_COVER_TILE to 100. Lo and behold, all scenes produced an output 🎉

/level1/sentinel2/T32UNA/S2A_MSIL1C_20170127T102301_N0204_R065_T32UNA_20170127T102258.SAFE: dc:   2.54%. wc:   0.11%. sc:  85.01%. cc:  21.30%. AOD: 0.2516. # of targets: 110/187.  1 product(s) written. Success! Processing time: 14 mins 17 secs
/level1/sentinel2/T32UNB/S2A_MSIL1C_20170127T102301_N0204_R065_T32UNB_20170127T102258.SAFE: dc:  32.64%. wc:   0.21%. sc:  77.08%. cc:  14.03%. AOD: 0.2225. # of targets: 654/2880.  3 product(s) written. Success! Processing time: 22 mins 00 secs
/level1/sentinel2/T32UNC/S2A_MSIL1C_20170127T102301_N0204_R065_T32UNC_20170127T102258.SAFE: dc:   8.05%. wc:   0.15%. sc:  85.21%. cc:  12.53%. AOD: 0.2424. # of targets: 93/303.  1 product(s) written. Success! Processing time: 10 mins 21 secs
/level1/sentinel2/T32UPA/S2A_MSIL1C_20170127T102301_N0204_R065_T32UPA_20170127T102258.SAFE: dc:  16.51%. wc:   0.61%. sc:  80.58%. cc:  11.57%. AOD: 0.1977. # of targets: 1478/2600.  1 product(s) written. Success! Processing time: 20 mins 03 secs
/level1/sentinel2/T32UPB/S2A_MSIL1C_20170127T102301_N0204_R065_T32UPB_20170127T102258.SAFE: dc:  85.96%. wc:   0.47%. sc:  66.74%. cc:  12.40%. AOD: 0.2098. # of targets: 3571/7550.  2 product(s) written. Success! Processing time: 34 mins 06 secs
/level1/sentinel2/T32UPC/S2A_MSIL1C_20170127T102301_N0204_R065_T32UPC_20170127T102258.SAFE: dc:  10.40%. wc:   0.43%. sc:  63.78%. cc:  16.04%. AOD: 0.2358. # of targets: 208/492.  1 product(s) written. Success! Processing time: 15 mins 50 secs
/level1/sentinel2/T32UQA/S2A_MSIL1C_20170127T102301_N0204_R065_T32UQA_20170127T102258.SAFE: dc:   0.88%. wc:   0.68%. sc:  91.33%. cc:   2.01%. AOD: 0.1588. # of targets: 168/196.  1 product(s) written. Success! Processing time: 13 mins 12 secs
/level1/sentinel2/T32UQB/S2A_MSIL1C_20170127T102301_N0204_R065_T32UQB_20170127T102258.SAFE: dc:  17.07%. wc:   0.39%. sc:  91.47%. cc:  15.61%. AOD: 0.2171. # of targets: 1082/1377.  2 product(s) written. Success! Processing time: 18 mins 13 secs
/level1/sentinel2/T33UUS/S2A_MSIL1C_20170127T102301_N0204_R065_T33UUS_20170127T102258.SAFE: dc:   7.09%. wc:   0.33%. sc:  94.56%. cc:  27.64%. AOD: 0.2657. # of targets: 118/189.  1 product(s) written. Success! Processing time: 13 mins 11 secs

Screenshot from 2021-04-01 14-57-06

Parameterization
FORCE_params__20210331T105111.txt

Setup

  • FORCE version
    I'm using v3.6.5 in a Singularity container, which was converted from the official Docker version.
    I'd be very surprised if this caused the problem. But just to be sure, maybe someone else can verify with another installation that this problem exists?

  • RAM and number of CPUs on your machine
    HPC with enough RAM and CPU


Happy Easter holidays!

force-cube output name

force-cube can be used to rasterize vector data into binary masks (https://davidfrantz.github.io/tutorials/force-masks/masks/). The written masks will have the same base name as the input vector file, just with the file extension replaced by *.tif. E.g.

force-cube vienna.shp /data/Dagobah/edc/misc/mask rasterize 10

creates:

/data/Dagobah/edc/misc/mask/X0077_Y0058/vienna.tif
/data/Dagobah/edc/misc/mask/X0077_Y0059/vienna.tif
/data/Dagobah/edc/misc/mask/X0078_Y0058/vienna.tif
/data/Dagobah/edc/misc/mask/X0078_Y0059/vienna.tif

It would be nice if force-cube had an optional BASE_MASK argument, e.g.

force-cube vienna.shp /data/Dagobah/edc/misc/mask rasterize 10 vienna10m.tif

This would allow users to:

  • store different masks with different resolutions below the same mask directory
  • shorten or rename potentially long vector file names

Restricting github-Workflows in forked repos which include pushing an image to DockerHub

Hey David,
the two new workflows "docker-publish-debug.yml" and "docker-debug.yml" result in every forked repository trying to upload FORCE to DockerHub. I assume this is not the desired behavior, so I added two conditionals to the workflows so that they only get executed in your repo.
An alternative would be to disable the workflows manually; should that be the desired approach to stop said workflows, I will refrain from opening a PR.

Cheers,
Florian

Cannot find .laads file within Docker container

When force-lut-modis is run within a Docker container, it errors with "couldn't retrieve user..", because the user is unknown in the Docker container. In particular, lines 51-52 in modwvp-ll.c give the error:
if (getlogin_r(user, NPOW_10) != 0){ printf("couldn't retrieve user..\n"); exit(1);}
I circumvented this problem for now by removing this check and putting the LAADS App key in the Docker working directory /app/.laads instead of /home/user/.laads, but a more generic solution is desirable.
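
A possible, more container-friendly fallback could look like the sketch below (hedged; this is not the FORCE implementation, and the function name laads_path is made up for illustration):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pwd.h>

/* Resolve the directory for the .laads key file without relying on
 * getlogin_r(), which fails in containers without a login session. */
int laads_path(char *path, size_t size){
  const char *home = getenv("HOME");         // usually set, even in containers

  if (home == NULL || *home == '\0'){
    struct passwd *pw = getpwuid(getuid());  // fall back to the passwd database
    if (pw != NULL) home = pw->pw_dir;
  }

  if (home == NULL) return 1;                // give up, caller handles the error

  snprintf(path, size, "%s/.laads", home);
  return 0;
}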

Nodata value of CSO products set to 1

The nodata value of CSO products is set to 1 in FORCE v3 (according to gdalinfo and QGIS).

Expected behavior would be a value outside the range that is to be expected as output, i.e. a negative number.

Force 3 running on Ubuntu Server 18.04.4 LTS

force-lut-modis crashing with gdal 3.2.0

When force is compiled against gdal 3.2.0, force-lut-modis crashes. When force is compiled with gdal 3.1.3, it works ok.

For building force, we use the osgeo/gdal ubuntu-full- docker images. When we build force in the osgeo/gdal ubuntu-full-3.2.0 image, force-lut-modis crashes:

create a dummy coordinate file:

echo "-92.3904 14.4691" > /wvp/wvp_20191017.coo

run force-lut-modis:

force-lut-modis /wvp/wvp_20191017.coo /wvp /wvp /wvp 2019 10 17 2019 10 17

output (in this case the modis data was already downloaded):

2019/10/17. do. TERRA geometa exists.
278 lines, 127 valid.
127 valid lines in geometa
AQUA geometa exists.
288 lines, 133 valid.
133 valid lines in geometa
requested box: UL -93.14/15.22, LR -91.64/13.72
1 intersecting granules
ERROR 4: : No such file or directory
ERROR 10: Pointer 'hDataset' is NULL in 'GDALGetRasterXSize'.

ERROR 10: Pointer 'hDataset' is NULL in 'GDALGetRasterYSize'.

ERROR 10: Pointer 'hDS' is NULL in 'GDALGetRasterBand'.

ERROR 10: Pointer 'hBand' is NULL in 'GDALRasterIO'.

could not read image. 100.0% of useful frames.
free(): invalid pointer
Aborted (core dumped)

Expected behavior
The hdf data can be read and converted, no errors happen, and no core dump.

Setup
FORCE version: tested with 3.5.2 and 3.6.1, both have the bug
ubuntu 20.04 (using these dockers: https://hub.docker.com/r/osgeo/gdal/tags?page=1&ordering=last_updated)

libgomp: Thread creation failed

Reported by @jakimowb via email.

The Level 2 ImproPhe submodule in force-higher-level occasionally throws this error:

________________________________________
Progress:                         26.00%
Time for I/C/O:           007%/092%/001%
ETA:             00y 00m 02d 21h 28m 59s
________________________________________
                   input compute  output
Processing unit:      27      26      25
Tile X-ID:            49      49      49
Tile Y-ID:            27      27      27
Chunk ID:             27      26      25
Threads:               8      22       4
Time (sec):          224    2747      33

libgomp: Thread creation failed: Resource temporarily unavailable
double free or corruption (!prev)
[1]    1148 abort (core dumped)  force-higher-level level2imp.prm.workaround/level2imp.prm

Add the option to process all files in a directory in force-cube

Hi there!

As far as I know, force-cube can only process single files.
It would be great to have the option to run force-cube over all files (e.g. Sentinel-1 GeoTIFF files) in a directory. In the meantime I'll just write my own script to do that, but having this option in force-cube would probably be helpful for others as well!

Cheers, Marco

Missing cloud shadow flag for non-buffered clouds

I tried to use the FORCE QAI product to create a cloud mask without the less confident clouds. However, I noticed that the cloud shadow flag is always 0 for this cloud state, which results in missing cloud shadows when using the unbuffered clouds. This was also visible when looking at the output created with force-qai-inflate.

cannot create directory errors

force-level2 complains about directories that already exist in DIR_TEMP, e.g. from a previous run.
Does that mean the respective image will not be processed at all?

Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete
ETA: 19757s Left: 1580 AVG: 13.15s  local:45/26/100%/14.6s mkdir: cannot create directory ‘/data/Alderaan/temp_bj/LC08_L1TP_226065_20180916_20180928_01_T1’: File exists
ETA: 10544s Left: 1362 AVG: 7.75s  local:117/244/100%/7.9s mkdir: cannot create directory ‘/data/Alderaan/temp_bj/LC08_L1TP_226067_20180714_20180730_01_T1’: File exists
ETA: 10430s Left: 1358 AVG: 7.69s  local:116/248/100%/7.8s mkdir: cannot create directory ‘/data/Alderaan/temp_bj/LC08_L1TP_226067_20180916_20180928_01_T1’: File exists
ETA: 10279s Left: 1349 AVG: 7.63s  local:117/257/100%/7.8s mkdir: cannot create directory ‘/data/Alderaan/temp_bj/LC08_L1TP_226067_20190514_20190521_01_T1’: File exists
ETA: 8642s Left: 1244 AVG: 6.95s  local:118/362/100%/7.1s

force-level1-csd on CIFS-mounted network drive

force-level1-csd throws an error and exits when working on a CIFS-mounted network drive:

force-level1-csd -s LT04,LT05,LE07 -c 0,70 download/metadata download/data download/queue.txt crete.shp

Searching for footprints / tiles intersecting with geometries of AOI shapefile...
ERROR 1: sqlite3_exec(COMMIT) failed: database is locked
ERROR 1: sqlite3_exec(PRAGMA synchronous = OFF) failed: Safety level may not be changed inside a transaction
ERROR 1: sqlite3_exec(BEGIN) failed: cannot start a transaction within a transaction
ERROR 1: Transaction not established
ERROR 1: A file system object called 'merged.gpkg' already exists.
ERROR 1: GPKG driver failed to create merged.gpkg

Apparently, GeoPackage doesn't work on network drives in Unix:
https://gis.stackexchange.com/questions/327214/is-there-a-fix-for-sqlite3-execommit-failed-database-is-locked-qgis-3-6-pyt

When doing the same on a local directory or an ext4-mounted drive, it works without a problem.

coredump in TSA submodule when using CAT or TRD without FLD

This bug was reported by Matt Clark in the Google Group.

The core dump appears using the TSA submodule in FORCE HLPS if

OUTPUT_FB* = FALSE AND OUTPUT_CA* = TRUE

This applies to any folding period (years, quarters, months, weeks, days).

I will develop a bug fix for this and ship with FORCE v. 3.7, which will be released soon.

If you work with earlier FORCE versions, set OUTPUT_FB* = TRUE to mitigate this issue.

Best,
David

Tiny bug in force-parameter

Bug

The command:

force-parameter data/param/ LEVEL2 0 

adds trailing spaces at the end of the names of some fields, particularly DO_AOD, DIR_AOD, MAX_CLOUD_COVER_TILE, CLOUD_BUFFER, SNOW_BUFFER and CLOUD_THRESHOLD.

For instance, it writes:

DO_AOD  = TRUE # <- this
DO_AOD = TRUE  # <- instead of this

Most parsers will likely ignore this, and it didn't really cause me any serious problem. But it kept me puzzled for a while when trying to compare two different parameter files generated by different means 😅.

Setup

  • Force v. 3.6.5
  • Running on Ubuntu 18.04.5 LTS
  • on my own laptop
  • where force was installed as usual

lance-modis expiring all app keys in favor of using tokens

Considering this message I received a few days ago from Lance-modis:

Dear LANCE-MODIS Users:

Starting 6/1/2021 we will be expiring all app keys in favor of using tokens. If you authenticate via the “Authorization: Bearer ” HTTPS header and the app key you provide is formatted as XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX then the app key will stop working starting 6/1/2021. To prevent disruption in your access to data please generate a new token. You can find instructions to generate a new token at https://ladsweb.modaps.eosdis.nasa.gov/tools-and-services/data-download-scripts/#tokens

Regards,
lance-modis Users Support

I think force-lut-modis will break because that is still using the deprecated "app key" instead of the newer "token".

Processing Sentinel-1-GRD data

Hi,

I was just wondering how to use the higher level features in "force" with Sentinel-1-GRD data as I can't find anything in the docs.

I have already applied a standard preprocessing workflow to the Sentinel-1-GRD data using the snappy framework and now have a time series of both ascending and descending geotiff files that cover the entire island of Ireland.

I would like to process these files further using the FORCE framework to calculate both Spectral Temporal and Texture Metrics. Is this possible to do using the "force" framework?

Can I simply use force-level2 with all of the processes in the parameter file set to NULL, or is further processing required to get the data into a format where it can be cubed by "force"?

Your help would be much appreciated,
Eoghan.

gdal-bin conflicting python-gdal with current releases on apt

Dear David,
I am currently installing the newest version of FORCE, 3.5.2, and read that from 3.5.0 on it supports GDAL 3.

First I wanted to install it on a VM, which is the usage our researchers are used to.
While installing the dependencies as stated in the docs there is an error:

sudo apt-get install libgdal-dev gdal-bin python-gdal

The error:

Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 gdal-bin : Breaks: python-gdal (< 2.4.3~) but 2.2.3+dfsg-2 is to be installed
 python-gdal : Depends: gdal-abi-2-2-3
               Depends: libgdal20 (>= 2.2.2) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.

You mention that you are testing with GDAL 2.2.2; this is currently not available on apt anymore (just to let you know):

apt-cache policy gdal-bin
gdal-bin:
  Installed: (none)
  Candidate: 3.0.4+dfsg-1~bionic0
  Version table:
     3.0.4+dfsg-1~bionic0 500
        500 http://ppa.launchpad.net/ubuntugis/ubuntugis-unstable/ubuntu bionic/main amd64 Packages
     2.2.3+dfsg-2 500
        500 http://archive.ubuntu.com/ubuntu bionic/universe amd64 Packages

 apt-cache policy libgdal-dev
libgdal-dev:
  Installed: (none)
  Candidate: 3.0.4+dfsg-1~bionic0
  Version table:
     3.0.4+dfsg-1~bionic0 500
        500 http://ppa.launchpad.net/ubuntugis/ubuntugis-unstable/ubuntu bionic/main amd64 Packages
     2.2.3+dfsg-2 500
        500 http://archive.ubuntu.com/ubuntu bionic/universe amd64 Packages

 apt-cache policy python-gdal
python-gdal:
  Installed: (none)
  Candidate: 2.2.3+dfsg-2
  Version table:
     2.2.3+dfsg-2 500
        500 http://archive.ubuntu.com/ubuntu bionic/universe amd64 Packages

Then I tried the Dockerized version as stated in the docs. It works. I have looked into the Docker image and tried to replicate it on a plain Ubuntu 18.04 VM. The same error occurs as stated above.

# Install libraries
RUN apt-get -y install wget unzip curl git build-essential libgdal-dev gdal-bin python-gdal \ 
  libarmadillo-dev libfltk1.3-dev libgsl0-dev lockfile-progs rename \
  parallel libfltk1.3-dev apt-utils cmake \
  libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev \
  python3.8 python3-pip

I wonder how or if this installation works in the Docker image?
I will go on with the installation now without python-gdal. Maybe it's not a hard dependency.
Best,
Peter

Incomplete feature output when sampling .tif-stack

Describe the bug
When using ++PARAM_SMP_START++ on a stacked .tif (INPUT_FEATURE with ~500 bands), an incomplete number of features is returned in the output .csv.

Expected behavior
A .csv with features from all bands would be expected. As a workaround, splitting the bands into <200 per INPUT_FEATURE gives the expected output.
smp_Landsat.zip

Inconclusive error message resulting from typo in RFC param file

FORCE 3.2.1 higher-level (RFC classification) throws the following error message if the file listed under FILE_MODEL does not exist:

terminate called after throwing an instance of 'cv::Exception'
  what():  OpenCV(4.1.0) /home/[I_removed_this_part]/src/opencv-4.1.0/modules/core/include/opencv2/core.hpp:3133: error: (-215:Assertion failed) fs.isOpened() in function 'load'

Aborted (core dumped)

This is rather cosmetic and I was lucky enough to catch my mistake within 2 minutes. I am surprised though. Does FORCE not check whether DIR_MODEL+FILE_MODEL exists? It does throw a specific error message if DIR_MODEL does not exist.

Vertical striping in landsat boa image after running force-l2ps

Describe the bug
After running force-l2ps, vertical striping is apparent in the resulting boa image. The striping is not there in the original Landsat band files, nor is it in the DEM. So it is probably FORCE that is adding it. The striping gets stronger when the adjacency correction is enabled, but is also there without adjacency correction.

Two Landsat scene IDs where this is happening:

  • LC81260582019090LGN00
  • LE71260582019114EDC00

screenshot of LC81260582019090LGN00 (bands 5/4/3) boa image:
LC81260582019090LGN00

Expected behavior
After running force-l2ps, I expect to get a boa image without vertical striping artifacts.

Parameterization
Command: force-l2ps /dprof_data/tmp/scenes/LC81260582019090LGN00 /dprof_data/tmp/scenes/LC81260582019090LGN00/force.prm
force.prm (renamed to force.prm.txt otherwise github won't let me attach it)

Setup

  • FORCE version: 3.5.2
  • OS and version: Ubuntu 20.04.1 LTS
  • GDAL version: 3.1.3
  • running in docker (derived from osgeo/gdal:ubuntu-full-3.1.3)
  • did you deviate from the installation instruction? -> no
  • 64G, intel core i9 (8 cores, 16 hyperthreading threads)

Possibility of using only the part "BRDF"

Dear David,
I would need to apply reflectance normalization from Sentinel-2 (A/B) when acquisitions for the same granule are derived from different relative orbits.
Would it be possible to use only a part of the entire package (FORCE)?
Thanks in advance for your reply

Conflict in outlier removal (QAI SCREENING) when using dense time series (>= 3 obs. / day)

Problem
Using the ABOVE_NOISE and BELOW_NOISE parameters (section QAI SCREENING) for TSA processing together with dense time series may cause a floating point exception. This happens if for any individual pixel there are >= 3 observations for a single date, which may only apply if multi-annual time series are (artificially) densified into a single year.

Not using the filtering, i.e. setting both parameters to zero is a temporary solution to the problem.

ETA improvements

Long-running FORCE algorithms provide an estimated time of arrival (ETA), which helps a lot in organizing and scheduling processing time with other users.

A typical force-level2 output, for example, is like:

Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete
ETA: 106005s Left: 3582 AVG: 29.59s local:25/3847/100%/29.6s

meaning the processing is expected to be done in 106005 seconds. It would be nice to see this ETA in date-time-stamp ISO format as well, e.g. '2020-04-04T13:06:25'
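
A minimal sketch of what that could look like (hedged; not FORCE code, and the function name print_eta_iso is made up for illustration):

#include <stdio.h>
#include <time.h>

/* Convert a remaining-seconds ETA into an ISO 8601 finish timestamp. */
void print_eta_iso(long seconds_left){
  time_t finish = time(NULL) + (time_t)seconds_left;   // expected finish time
  struct tm tm_finish;
  char buf[32];

  localtime_r(&finish, &tm_finish);                     // local wall-clock time
  strftime(buf, sizeof(buf), "%Y-%m-%dT%H:%M:%S", &tm_finish);
  printf("ETA: %lds (%s)\n", seconds_left, buf);
}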

Problem with GDAL >= 3.0

There currently is an incompatibility of FORCE with GDAL >= 3.0.

Since GDAL >= 3.0, the coordinates returned by coordinate transformation operations are no longer ordered as X/Y or LON/LAT, but in the axis order defined by each coordinate system (Link).

  x = utm_x; // easting
  y = utm_y; // northing

  transf = OGRCreateCoordinateTransformation(&SRS_UTM, &SRS_WGS);

  transf->Transform(1, &x, &y);

In GDAL < 3.0, this code snippet transforms x to longitude, and y to latitude.
Since GDAL 3.0, this code snippet transforms x to latitude, and y to longitude.

Until I have fixed this, it is not recommended to use FORCE with GDAL >= 3.0!
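
One common way to restore the old behavior (hedged; this is a sketch, not necessarily the fix adopted in FORCE) is to request the traditional GIS axis order on the spatial references before creating the transformation:

#include "gdal_version.h"
#include "ogr_spatialref.h"

void utm_to_wgs(OGRSpatialReference &SRS_UTM, OGRSpatialReference &SRS_WGS,
                double utm_x, double utm_y, double *lon, double *lat){
  double x = utm_x, y = utm_y;

#if GDAL_VERSION_MAJOR >= 3
  // keep the pre-3.0 convention: x = easting/longitude, y = northing/latitude
  SRS_UTM.SetAxisMappingStrategy(OAMS_TRADITIONAL_GIS_ORDER);
  SRS_WGS.SetAxisMappingStrategy(OAMS_TRADITIONAL_GIS_ORDER);
#endif

  OGRCoordinateTransformation *transf =
      OGRCreateCoordinateTransformation(&SRS_UTM, &SRS_WGS);
  transf->Transform(1, &x, &y);

  *lon = x; *lat = y;
  OGRCoordinateTransformation::DestroyCT(transf);
}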

level1-csd: Duplicate Landsat scenes downloaded

Google seems to keep duplicate Collection 1 Level 1 products on GCS. Products differing only in processing timestamp have been observed in several instances.
Example of two products acquired on the same day on the same path/row:

LC08_L1TP_193026_20170805_20180523_01_T1
LC08_L1TP_193026_20170805_20170812_01_T1

Sentinel-2 data is checked for duplicates, but finding duplicates in Landsat Collection data was not expected. Therefore, force-level1-csd does not account for duplicates at the moment, leading to an overhead in download volume.

Unable to read parameter from MTL file (Landsat 7)

I have an issue when trying to generate Level-2 products for Landsat 7 data.

Right after starting the process, I receive the following error:

error in fname. Unable to read parameter from MTL file!

I tried also in debug mode with force-l2ps which produced the following log:

check that all meta is initialized, stack as well?
there are still some things to do int meta. checking etc

Mission: 0
Start of RSR array: 14
DN: LE07_L1TP_189027_20190823_20190919_01_T1_B1.TIF
 LMAX/LMIN 293.70/-6.20, QMAX/QMIN 255.00/1.00, R*/R+ 0.00186/-0.01, K1/K2 -32767.00/-32767.00
DN: LE07_L1TP_189027_20190823_20190919_01_T1_B2.TIF
 LMAX/LMIN 300.90/-6.40, QMAX/QMIN 255.00/1.00, R*/R+ 0.00209/-0.01, K1/K2 -32767.00/-32767.00
DN: LE07_L1TP_189027_20190823_20190919_01_T1_B3.TIF
 LMAX/LMIN 234.40/-5.00, QMAX/QMIN 255.00/1.00, R*/R+ 0.00199/-0.01, K1/K2 -32767.00/-32767.00
DN: LE07_L1TP_189027_20190823_20190919_01_T1_B4.TIF
 LMAX/LMIN 241.10/-5.10, QMAX/QMIN 255.00/1.00, R*/R+ 0.00291/-0.02, K1/K2 -32767.00/-32767.00
DN: LE07_L1TP_189027_20190823_20190919_01_T1_B5.TIF
 LMAX/LMIN 47.57/-1.00, QMAX/QMIN 255.00/1.00, R*/R+ 0.00277/-0.02, K1/K2 -32767.00/-32767.00
DN: LE07_L1TP_189027_20190823_20190919_01_T1_B7.TIF
 LMAX/LMIN 16.54/-0.35, QMAX/QMIN 255.00/1.00, R*/R+ 0.00263/-0.02, K1/K2 -32767.00/-32767.00
DN: NULL
 LMAX/LMIN -32767.00/-32767.00, QMAX/QMIN -32767.00/-32767.00, R*/R+ -32767.00000/-32767.00, K1/K2 666.09/1282.71
dtype: 8
sat: 255
refsys   = 189027
tier: 1

stack info for FORCE Digital Number stack - DN_ - SID -1
open: 0, format 1
datatype 5 with 0 bytes
filename: ./tmp/DIGITAL-NUMBERS.tif
nx: 8171, ny: 7181, nc: 58675951, res: 30.000, nb: 7
width: 245130.0, height: 215430.0
chunking: nx: 0, ny: 0, nc: 0, width: 0.0, height: 0.0, #: 0
active chunk: -1, tile X0000_Y0000
ulx: 537585.000, uly: 5367015.000
proj: PROJCS["WGS 84 / UTM zone 33N",GEOGCS["WGS 84",DATUM["WGS_1984",SPHEROID["WGS 84",6378137,298.257223563,AUTHORITY["EPSG","7030"]],AUTHORITY["EPSG","6326"]],PRIMEM["Greenwich",0,AUTHORITY["EPSG","8901"]],UNIT["degree",0.0174532925199433,AUTHORITY["EPSG","9122"]],AUTHORITY["EPSG","4326"]],PROJECTION["Transverse_Mercator"],PARAMETER["latitude_of_origin",0],PARAMETER["central_meridian",15],PARAMETER["scale_factor",0.9996],PARAMETER["false_easting",500000],PARAMETER["false_northing",0],UNIT["metre",1,AUTHORITY["EPSG","9001"]],AXIS["Easting",EAST],AXIS["Northing",NORTH],AUTHORITY["EPSG","32633"]]
par: DIR_LEVEL2: ./level2, DIR_TEMP: ./tmp, DIR_LOG: ./log, FILE_QUEUE: ./poolfile.pool, DIR_WVPLUT: NULL, DIR_AOD: NULL, FILE_TILE: NULL, FILE_DEM: ./dem/dem.tif, DIR_MASTER: NULL, TILE_SIZE: 30000.000000, BLOCK_SIZE: 3000.000000, RESOLUTION_LANDSAT: 30.000000, RESOLUTION_SENTINEL2: 10.000000, DO_REPROJ: 0, DO_TILE: 1, ORIGIN_LAT: 48.514552, ORIGIN_LON: 15.400338, PROJECTION: GLANCE7, RESAMPLING: 2, RES_MERGE: 2, TIER: 1, DO_TOPO: 1, DO_ATMO: 1, DO_AOD: 1, DO_BRDF: 1, MULTI_SCATTERING: 1, ADJACENCY_EFFECT: 1, WATER_VAPOR: 1.000000, IMPULSE_NOISE: 1, BUFFER_NODATA: 0, DEM_NODATA: -32767, MASTER_NODATA: -32767, MAX_CLOUD_COVER_FRAME: 100.000000, MAX_CLOUD_COVER_TILE: 100.000000, CLOUD_THRESHOLD: 0.225000, SHADOW_THRESHOLD: 0.020000, NPROC: 32, NTHREAD: 2, PARALLEL_READS: 0, DELAY: 3, TIMEOUT_ZIP: 30, OUTPUT_FORMAT: 1, OUTPUT_DST: 0, OUTPUT_AOD: 0, OUTPUT_WVP: 0, OUTPUT_VZN: 0, OUTPUT_HOT: 0, OUTPUT_OVV: 1
++band # 0 - save 1, nodata: 0, scale: 1.000000
wvl: 0.478263, domain: BLUE, band name: BLUE, sensor ID: LND07
Date: 2019-08-23 (235/34/737170) 09:26:17-00Z
++band # 1 - save 1, nodata: 0, scale: 1.000000
wvl: 0.560585, domain: GREEN, band name: GREEN, sensor ID: LND07
Date: 2019-08-23 (235/34/737170) 09:26:17-00Z
++band # 2 - save 1, nodata: 0, scale: 1.000000
wvl: 0.660991, domain: RED, band name: RED, sensor ID: LND07
Date: 2019-08-23 (235/34/737170) 09:26:17-00Z
++band # 3 - save 1, nodata: 0, scale: 1.000000
wvl: 0.834120, domain: NIR, band name: NIR, sensor ID: LND07
Date: 2019-08-23 (235/34/737170) 09:26:17-00Z
++band # 4 - save 1, nodata: 0, scale: 1.000000
wvl: 1.649829, domain: SWIR1, band name: SWIR1, sensor ID: LND07
Date: 2019-08-23 (235/34/737170) 09:26:17-00Z
++band # 5 - save 1, nodata: 0, scale: 1.000000
wvl: 2.207666, domain: SWIR2, band name: SWIR2, sensor ID: LND07
Date: 2019-08-23 (235/34/737170) 09:26:17-00Z
++band # 6 - save 1, nodata: 0, scale: 1.000000
wvl: 11.000000, domain: TEMP, band name: TEMP, sensor ID: LND07
Date: 2019-08-23 (235/34/737170) 09:26:17-00Z

init and check for stack struct, too?
error in fname. Unable to read parameter from MTL file!

Here is everything that is needed to reproduce the error: https://my.pcloud.com/publink/show?code=kZx1E9kZfuI5HIpsiD8QibgybiqhmYhOcSv7

This is how I tried:
force-l2ps imagery/ parameters.prm

What went wrong?

Argument -n in force-level1-csd not working properly

Issue as discussed by @corneliussenf and others in #77.

The option -n in force-level1-csd does not work as expected when used as 1st argument.

As @ernstste suggested, there are currently two ways to work around this:

  • do not use -n as first argument, but after other arguments
  • use the long option --no-act instead

Unfortunately, we do not have any clue why this happens.
Help wanted!
We are using the getopt utility for this, see https://github.com/davidfrantz/force/blob/develop/bash/force-level1-csd.sh

Best,
David

force-level1-csd area of interest

force-level1-csd fails to read the area of interest from ASCII text files and from the command line, at least for the following polygon:

projectaoi.txt

-57.11,-10.13
-57.11,-5.42
-54.17,-5.42
-54.17,-10.13
-57.11,-10.13

Error:

Tile list as AOI detected.

Error: One or more tiles seem to be formatted incorrectly.
Please check -57.11.-10.13

The following command was used on BB8:

force-level1-csd \
/data/Jakku/diss_bj/level1 \
/data/Jakku/diss_bj/level1 \
/data/Jakku/diss_bj/level1/queue.txt \
/data/Jakku/diss_bj/level1/projectaoi.txt

Setup

  • FORCE 3.5.2
  • Ubuntu 18 (BB8)

insert valid ranges into index table

ARVI -10000 10000
BLUE 0 10000
EVI -10000 10000
GREEN 0 10000
MNDWI -10000 10000
NBR -10000 10000
NDBI -10000 10000
NDMI -10000 10000
NDSI -10000 10000
NDTI -10000 10000
NDVI -10000 10000
NDWI -10000 10000
NIR 0 10000
RED 0 10000
SARVI -10000 10000
SAVI -10000 10000
SWIR1 0 10000
SWIR2 0 10000
TCB 0 22893
TCDI -3793 35433
TCG -10804 7940
TCW -12915 7032

Feature request: force-level1-csd

Hi

It would be useful if the optional arguments -n and -k could be combined so metadata could be extracted to file for data already on disk. This would be an easy way to find data with certain criteria. Currently I don't get any metadata (csd_metadata_YYYY-MM-DDTHH-MM-SS) when combining these or when searching for data downloaded before.

Regards
/Jonas

[jonasardo@aurora-nateko Uganda]$ force-suite force-level1-csd -d 20170101,20201008, -k -s S2A,S2B -c 0,0 /projects/eko/fs2/FORCE/metadata/ ./L1/ ./L1/TMP.txt T36NXG

Querying the metadata catalogue for Sentinel2 data
Sensor(s): S2A,S2B
Tile(s): T36NXG
Daterange: 20170101 to 20201008
Cloud cover minimum: 0%, maximum: 0%

18 Sentinel2 Level 1 scenes matching criteria found
3.51GB data volume found.

Starting to download 18 Sentinel2 Level 1 scenes

Scene S2A_MSIL1C_20200120T080241_N0208_R035_T36NXG_20200120T094031(18 of 18) exists, skipping...

|========================================================================================================================================================================================================================| 100 %
Done.

force-level1-csd: SQL error selecting geometry column when AOI is specified as vector dataset

When using an vector data set as AOI, force-level1-csd will throw the following error for some users:

ERROR 1: In ExecuteSQL(): sqlite3_prepare_v2(SELECT sentinel2.PRFID FROM sentinel2, aoi WHERE ST_Intersects(sentinel2.geom, ST_Transform(aoi.geom, 4326))): no such column: sentinel2.geom

force-level1-csd creates a geopackage containing tiles/footprints downloaded from the HU WFS server and the user-specified AOI to do an intersect. On some machines, the geometry column of this geopackage is named 'geometry' rather than 'geom', even though 'geom' is defined by the GDAL gpkg driver.

This will be fixed by explicitly specifying the geometry column name when creating the gpkg.
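
In GDAL terms, that could look like the following sketch (hedged; force-level1-csd itself is a shell script, and the layer name "aoi" is made up for illustration). The GPKG driver accepts a GEOMETRY_NAME layer creation option:

#include "gdal.h"
#include "ogr_srs_api.h"

/* Create a GeoPackage layer with an explicit geometry column name so the
 * intersect SQL can always reference 'geom'. */
OGRLayerH create_aoi_layer(GDALDatasetH gpkg, OGRSpatialReferenceH srs){
  char *options[] = { (char *)"GEOMETRY_NAME=geom", NULL };  // GPKG layer creation option
  return GDALDatasetCreateLayer(gpkg, "aoi", srs, wkbMultiPolygon, options);
}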

force-level2 blank BOA output for Landsat

I have an issue regarding the Level-2 generation of Landsat imagery on the Dockerized version of FORCE v3.0.1. Maybe it only happens on my FORCE, but it is worth a check. This is an issue I experienced also with FORCE v2.x, and only with Landsat, not with Sentinel-2 images.

The issue:
A force-level2 run completes without errors, the logfile says "...: dc: 62.22%. wc: 1.85%. sc: 0.17%. cc: 25.26%. AOD: 0.0000. # of targets: 346/2037. 56 product(s) written. Success! Processing time: 03 mins 08 secs...", and the level2 tile folders are created with the required raster images (BOA, QAI). The problem is that all BOA files are blank, meaning they contain 0 values at each pixel. What makes it even stranger is that the QAI files are not blank at all; they contain the desired flags just as with Sentinel-2 images.

I also tried with a debug version of FORCE using force-l2ps for detailed output but the level2 results are the same. I saved the detailed output in a file: out.txt.

This is how I called force-l2ps from the LC81880272018121LGN00 folder:
force-l2ps imagery/ parameters.prm

Here are my results with the detailed output:
https://my.pcloud.com/publink/show?code=kZNIs9kZHjoRv6qF9m0Miavl3i6AA7HAgbGy

What did I do wrong? As I said, with Sentinel-2 I got good results.

This is what my BOA files look like:
image

Here are the QAI images:
image

force-level1-sentinel2 problems with folder access

I tried to download S2 data using force-level1-sentinel2. The query seems to work fine, but then the error below occurs. MGRS tile folders are created, but remain empty.

[thielf@node128 s2_mv]$ force-level1-sentinel2 /home/thielf/s2_mv/level1 /home/thielf/s2_mv/level1/mvlist.txt "10.5936728817/54.6850110112,10.5936728817/53.1103025704,14.4123810102/53.1103025704,14.4123810102/54.6850110112,10.5936728817/54.6850110112" 2019-08-01 2019-08-31 0 40
2020-03-24_19:12:26 - Found 96 S2A/B files.
2020-03-24_19:12:26 - Found 96 S2A/B files on this page.

/home/thielf/s2_mv/level1/T33UUU/S2B_MSIL1 100%[=============================================================================================>] 796.54M  11.8MB/s    in 56s
ls: cannot access '/home/thielf/s2_mv/level1/T33UUU/S2B_MSIL1C_20190830T102029*.SAFE': No such file or directory

/home/thielf/s2_mv/level1/T33UUV/S2B_MSIL1 100%[=============================================================================================>] 610.87M  12.1MB/s    in 57s
ls: cannot access '/home/thielf/s2_mv/level1/T33UUV/S2B_MSIL1C_20190830T102029*.SAFE': No such file or directory

Statistical metrics (STMs) incorrectly calculated for interpolated time-series that includes repetition

Potential Bug
I have observed some pixels exhibiting incorrect statistics (STMs, e.g. Q10) when comparing the FORCE-calculated values to "manually" calculated statistics based on the RBF-interpolated TSI values. So far, I could only find such behaviour when the TSI has repetitive values.

Example
RBF based TSI of MNDWI values (zeros are actually NaNs and only displayed as zeros):

image

The associated STM values:

image (1)

[Enhancement] Level2 DEM_NODATA

It is a bit unclear how the level2 processing handles the DEM_NODATA value, or if it has to be defined at all. Is it necessary to define it in the level2 parameter file, even if it is already given in the FILE_DEM image's metadata?

# Nodata value of the DEM.
# Type: Integer. Valid range: [-32767,32767]
DEM_NODATA = -32767

An enhancement could be like this:

# Nodata value of the DEM (optional)
# Type: Integer. Valid range: [-32767,32767]
# Set to NULL to use the nodata value defined in the image metadata.
DEM_NODATA = NULL
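
A sketch of the assumed behavior behind such a NULL option (hedged; not FORCE code, and the function name dem_nodata is made up for illustration) would simply fall back to the nodata value stored in the DEM metadata:

#include <stdio.h>
#include "gdal.h"

/* Use the user-defined DEM_NODATA if given, else read it from FILE_DEM. */
int dem_nodata(GDALDatasetH dem, int param_is_null, double param_nodata,
               double *nodata){
  int has_nodata = 0;

  if (!param_is_null){ *nodata = param_nodata; return 0; }

  *nodata = GDALGetRasterNoDataValue(GDALGetRasterBand(dem, 1), &has_nodata);

  if (!has_nodata){
    fprintf(stderr, "DEM_NODATA = NULL, but FILE_DEM has no nodata value.\n");
    return 1;
  }

  return 0;
}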

force-level1-sentinel2 CLI printouts

force-level1-sentinel2 repeats the total number of files for each page:

2020-03-12_11:21:50 - Found 2535 S2A/B files.
2020-03-12_11:21:50 - Found 100 S2A/B files on this page.
2020-03-12_11:22:04 - Found 2535 S2A/B files.
2020-03-12_11:22:04 - Found 100 S2A/B files on this page.
2020-03-12_11:22:12 - Found 2535 S2A/B files.
2020-03-12_11:22:12 - Found 100 S2A/B files on this page.
...
2020-03-12_11:27:29 - Found 35 S2A/B files on this page.

The progress would become much clearer if the total number were shown only once:

2020-03-12_11:21:50 - Found 2535 S2A/B files.
2020-03-12_11:21:50 - Found 100 S2A/B files on this page.
2020-03-12_11:22:04 - Found 100 S2A/B files on this page.
2020-03-12_11:22:12 - Found 100 S2A/B files on this page.
...
2020-03-12_11:27:29 - Found 35 S2A/B files on this page.

copy_string error

For a Landsat scene (LC81210622020202LGN00), force-l2ps exits with the error "cannot copy, string too long.". This happens at line 699 of src/lower-level/gas-ll.c:
copy_string(source, NPOW_02, tokenptr)
because tokenptr apparently contains a string that is too long; printing it gives "MYD". This happens when reading a daily WVP_***.txt file where "MYD" is in the last column. Perhaps there is no end-of-string character for tokenptr? Anyway, replacing line 699 with:
strncpy(source, tokenptr, NPOW_02)
makes the preprocessing succeed.
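
Note that strncpy() does not null-terminate when the source fills the whole buffer, so a defensive variant (hedged sketch; not necessarily the fix applied in FORCE, and the function name copy_string_safe is made up) would terminate explicitly:

#include <string.h>

/* Bounded copy that always leaves dst null-terminated. */
void copy_string_safe(char *dst, size_t size, const char *src){
  if (size == 0) return;
  strncpy(dst, src, size - 1);   // copy at most size-1 characters
  dst[size - 1] = '\0';          // always terminate
}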

force-higher-level: wrong sensor names in stacked output

force-higher-level allows creating a stack of a spectral index, e.g. NBR or NDVI, with the number of bands equal to the number of level2 input observations.

Each band of the index value stack provides metadata in the FORCE domain, which can be accessed via GDAL as follows:

b = ds.GetRasterBand(1)
b.GetMetadata('FORCE')
{'Date': '2018-02-27T00:00:00.0Z', 'Domain': '20180227', 'Scale': '10000.000', 'Sensor': 'LND08', 'Wavelength': '2018.156', 'Wavelength_unit': 'decimal year'}
b = ds.GetRasterBand(2)
b.GetMetadata('FORCE')
{'Date': '2018-04-17T00:00:00.0Z', 'Domain': '20180417', 'Scale': '10000.000', 'Sensor': 'LND08', 'Wavelength': '2018.290', 'Wavelength_unit': 'decimal year'}

Problem: in case of multiple sensors, the Sensor metadata value is always the value of the sensor from the 1st time series observation. In the example above (see attached zip), the L2 observation for Date 2018-04-17 is from sensor SEN2B instead of LND08.
bugstamp.zip
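
On the writer side, the expected behavior (hedged sketch; assumed logic, not FORCE code, and the function name set_band_sensors is made up) would be to take the Sensor item per band from the corresponding observation rather than from the first one:

#include "gdal.h"

/* Write the per-band "Sensor" item in the FORCE metadata domain. */
void set_band_sensors(GDALDatasetH ds, const char **sensor, int nbands){
  for (int b = 0; b < nbands; b++){
    GDALRasterBandH band = GDALGetRasterBand(ds, b + 1);
    GDALSetMetadataItem(band, "Sensor", sensor[b], "FORCE");  // e.g. "LND08", "SEN2B"
  }
}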

[FR] Allow to define class weights in force-train

Problem

Classification reference data can be very imbalanced, e.g. contain many more samples for a class like "forest" compared to rare observations like "Bavarian beer tent".
Calibrating an ML classification model on such imbalanced data usually tends to optimize the model for the more frequent classes.

To deal with this, OpenCV allows defining the following parameters:

CvSVMParams::CvSVMParams

class_weights – Optional weights in the C_SVC problem, assigned to particular classes. They are multiplied by C so the parameter C of class #i becomes class_weights_i * C. Thus these weights affect the misclassification penalty for different classes. The larger the weight, the larger the penalty on misclassification of data from the corresponding class.

CvDTreeParams::CvDTreeParams

priors – The array of a priori class probabilities, sorted by the class label value. The parameter can be used to tune the decision tree preferences toward a certain class. For example, if you want to detect some rare anomaly occurrence, the training base will likely contain much more normal cases than anomalies, so a very good classification performance will be achieved just by considering every case as normal. To avoid this, the priors can be specified, where the anomaly probability is artificially increased (up to 0.5 or even greater), so the weight of the misclassified anomalies becomes much bigger, and the tree is adjusted properly. You can also think about this parameter as weights of prediction categories which determine relative weights that you give to misclassification. That is, if the weight of the first category is 1 and the weight of the second category is 10, then each mistake in predicting the second category is equivalent to making 10 mistakes in predicting the first category.

Feature Request

Allow defining class weights in the force-train parameter (.prm) file. Ideally, it can be used in a 3-class example like:

  • ML_CLASSWEIGHTS = NULL

  • ML_CLASSWEIGHTS = balanced adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * n_class_samples)

  • ML_CLASSWEIGHTS = 0.2 0.6 0.2 explicit weights, put highest weight on 2nd class

  • ML_CLASSWEIGHTS = 10 20 10 explicit weights, translate into fractions by class_weight / (sum of class weights)
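
A minimal sketch of how explicit weights could be passed to the OpenCV learners that force-train wraps (hedged; OpenCV 4 API, not FORCE code; the weights shown are the hypothetical 0.2 0.6 0.2 example from above):

#include <opencv2/ml.hpp>

void apply_class_weights(cv::Ptr<cv::ml::SVM> svm, cv::Ptr<cv::ml::RTrees> rf){
  cv::Mat weights = (cv::Mat_<float>(3, 1) << 0.2f, 0.6f, 0.2f);

  svm->setClassWeights(weights);  // scales the per-class C penalty (C_SVC)
  rf->setPriors(weights);         // a priori class probabilities for the trees
}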
