
libsim's Introduction


LIBSIM

Introduction

Libsim is a collection of Fortran libraries and command-line tools.

The libraries include a general-purpose 'base' library with modules for handling character variables, CSV files, command-line arguments, physical constants, date and time computations, georeferenced coordinates, growable arrays, list structures of any type, and more.

Another set of libraries is specific to meteorology and Earth science and supports working with gridded and sparse georeferenced data, performing interpolations, statistical processing in time, data quality control, and thermodynamic computations.

The ready-to-use command-line tools perform many kinds of spatial interpolation and time computation on georeferenced data in GRIB and BUFR format.

Libsim relies heavily on DB-All.e/wreport for handling sparse point data and on the ECMWF grib_api for gridded data in GRIB format. It can also read shapefiles and all GDAL-supported raster formats.

Libsim is written in modern, object-oriented-style Fortran 90; since its development started in 2006, only a few modules have been written with true O-O Fortran 2003 syntax.

Build and dependencies

The package is meant to be built on Linux with the GNU Compiler Collection (gcc/gfortran, etc.). With some limitations, it can also be built on other POSIX-compliant operating systems and/or with other Fortran compilers.

The build process is not trivial and requires the installation of many dependencies.

The SMND package provides a framework for compiling the main libsim dependencies and libsim itself, as well as a universal binary package suitable for installation on a generic state-of-the-art x86_64 Linux system.

Versioning

From version 7.0.0, libsim follows semver 2.0 for API and ABI changes. The library version (-version-info) follows the libtool conventions.

Documentation

The documentation for all command-line tools can be found in their man pages. All command-line tools also provide extensive command-line help, accessible through the "--help" option.

The Fortran API documentation is not available online at the moment; it can be generated during the package build process with Doxygen.

GRIB and BUFR variables association

The file vargrib2bufr.csv contains a descriptive association between the WMO BUFR B table variables hosted in dballe.txt and GRIB parameters.

The csv fields are:

  1. B CODE (in the form Bxxyyy, the numeric part corresponds to the first field of the dballe.txt table)
  2. description
  3. GRIB parameter unit
  4. GRIB originating centre
  5. Table (GRIB1) / Category (GRIB2)
  6. Parameter
  7. Discipline (GRIB2) / 255 (GRIB1)
  8. Scale (to corresponding B CODE units)
  9. Offset (to corresponding B CODE units)

Additional notes:

  • Multiple GRIB parameters can be associated with the same B table variable, but not vice versa (B codes are unique for each physical variable).
  • BUFR B table variables are identified by the first field only, while GRIB variables are identified by fields 3 to 7.
  • Values can be missing (empty string for character fields, 255 for numeric fields); a missing value is interpreted as a wildcard.
  • If field 5 is ≥ 128, the originating centre is considered mandatory and cannot be missing (255).
  • If a variable is present in both GRIB1 and GRIB2 formats, it needs two separate records: one with missing Discipline (255) and one with the GRIB2 Discipline value.
  • A local version of vargrib2bufr.csv can be used by setting the environment variable LIBSIM_DATA to a path containing a modified version of the file.
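As an illustrative sketch (not libsim code), a record with the 9-field layout described above can be parsed and a GRIB value converted to B-code units. The sample row is hypothetical, and the assumption here is that scale and offset are applied as value * scale + offset; the actual convention is defined by libsim's sources.

```python
import csv
from io import StringIO

# Hypothetical record following the 9-field layout described above:
# B code, description, unit, centre, table/category, parameter, discipline, scale, offset
sample = "B12101,TEMPERATURE/DRY-BULB TEMPERATURE,K,255,2,11,255,1.0,0.0"

def parse_record(line):
    """Split one vargrib2bufr.csv-style record into named fields."""
    fields = next(csv.reader(StringIO(line)))
    return {
        "bcode": fields[0],
        "description": fields[1],
        "unit": fields[2],
        # 255 (numeric) and the empty string (character) act as wildcards
        "centre": int(fields[3]),
        "category": int(fields[4]),
        "parameter": int(fields[5]),
        "discipline": int(fields[6]),
        "scale": float(fields[7]),
        "offset": float(fields[8]),
    }

def grib_to_bufr(value, record):
    """Convert a GRIB-coded value to the units of the corresponding B code
    (assumed linear: value * scale + offset)."""
    return value * record["scale"] + record["offset"]

rec = parse_record(sample)
print(grib_to_bufr(288.0, rec))  # temperature already in K: unchanged
```

A real lookup would additionally match fields 3 to 7 against the GRIB message keys, treating 255/empty as wildcards as noted above.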

Contact and copyright information

The authors of libsim are:
Davide Cesari [email protected]
Paolo Patruno [email protected]

libsim is Copyright (C) 2010-2023 ARPAE-SIMC [email protected]

Libsim is licensed under the terms of the GNU General Public License version 2 or later. Please see the file COPYING for details.

Contact information for ARPAE-SIMC (formerly ARPA-SIM):

Agenzia Regionale Prevenzione Ambiente e Energia dell'Emilia-Romagna (ARPAE)
Servizio Idro-Meteo-Clima (SIMC)

Address: Viale Silvani 6, 40122 Bologna, Italy
Tel: + 39 051 6497511
Fax: + 39 051 6497501
Email: [email protected]
Website: https://www.arpae.it/

libsim's People

Contributors

brancomat, dcesari, edigiacomo, lidiabressan, pat1, spanezz, sunguendoli, umberto-pellegrini


Forkers

xiangyi-wang

libsim's Issues

dba_qcfilter discards every context (level, timerange) except one

The dba_qcfilter command appears to discard all variables except the static ones (station name, height, etc.) and those belonging to a single context.

The tests were performed with libsim-6.1.0-1506.

I have committed the test file dba_qcfilter-bug.bufr in the data directory of the dba_qcfilter-bug-1 branch.

The test file (a generic BUFR) contains a single message with 4 contexts:

$ dbamsg dump --interpreted dba_qcfilter-bug.bufr
#0[0] generic message with 4 contexts:
Level 1,-,-,-  tr 1,0,3600  1 vars:
013011 TOTAL PRECIPITATION / TOTAL WATER EQUIVALENT(KG/M**2): 0.2
Level 1,-,-,-  tr 1,0,86400  1 vars:
013011 TOTAL PRECIPITATION / TOTAL WATER EQUIVALENT(KG/M**2): 7.4
Level 103,2000,-,-  tr 254,0,0  2 vars:
012101 TEMPERATURE/DRY-BULB TEMPERATURE(K): 287.95
013003 RELATIVE HUMIDITY(%): 94
Level -,-,-,-  tr -,-,-  6 vars:
001019 LONG STATION OR SITE NAME(CCITTIA5): Camse
001194 [SIM] Report mnemonic(CCITTIA5): locali
005001 LATITUDE (HIGH ACCURACY)(DEGREE): 44.60016
006001 LONGITUDE (HIGH ACCURACY)(DEGREE): 12.07738
007030 HEIGHT OF STATION GROUND ABOVE MEAN SEA LEVEL (SEE NOTE 3)(M): -1.0
007031 HEIGHT OF BAROMETER ABOVE MEAN SEA LEVEL (SEE NOTE 4)(M): 0.0

Filtering with dba_qcfilter, everything is discarded except the station data context and one of the other 3:

$ dba_qcfilter < dba_qcfilter-bug.bufr | dbamsg dump --interpreted
#0[0] generic message with 2 contexts:
Level 103,2000,-,-  tr 254,0,0  2 vars:
012101 TEMPERATURE/DRY-BULB TEMPERATURE(K): 287.95
013003 RELATIVE HUMIDITY(%): 94
Level -,-,-,-  tr -,-,-  6 vars:
001019 LONG STATION OR SITE NAME(CCITTIA5): Camse
001194 [SIM] Report mnemonic(CCITTIA5): locali
005001 LATITUDE (HIGH ACCURACY)(DEGREE): 44.60016
006001 LONGITUDE (HIGH ACCURACY)(DEGREE): 12.07738
007030 HEIGHT OF STATION GROUND ABOVE MEAN SEA LEVEL (SEE NOTE 3)(M): -1.0
007031 HEIGHT OF BAROMETER ABOVE MEAN SEA LEVEL (SEE NOTE 4)(M): 0.0

I suspect that the output keeps only the last context read, overwriting the previous ones (except for the station data context).

[v7d_transform] heavy memory allocation

I have an SQLite db with hourly accumulated precipitation (attached: ipf.zip), from which a series of 90-60-30-15-5 day accumulations is computed with this syntax (with comp-step varying from case to case):

v7d_transform --input-format=dba --output-format=csv --comp-stat-proc 1 --comp-step="90 00"  --comp-start="2016-12-29" --csv-header=1 --comp-frac-valid="0.6" "sqlite:/idro_preci_frane.sqlite"  preci_cumulata_90.CSV

Apart from the run time (already analysed elsewhere), the problem at the moment is memory allocation: running it under /usr/bin/time -v produces:

Maximum resident set size (kbytes): 2124580

so it has allocation peaks of 2 GB for computations that do not seem prohibitive (and this does not change when comp-step varies). Can the situation be improved/mitigated?

doc build requirements not checked

the following commands are launched with no check in configure:

  • latex (texlive-latex-bin)
  • dvips (texlive-dvips-bin)
  • dot (graphviz)

I added some dependencies in the spec file, but I guess the doc compilation should somehow be tested (the commands were failing without stopping the build).

Suspicious use of optio_c

A wrong use of the optio_c function generates NUL characters in the log output; all uses of optio_c should be reviewed (grep -i optio_c */*.?90).

ECMWF / COSMO variable conversion

it would (perhaps) be useful to have some way of converting variables (e.g. from ECMWF to COSMO and vice versa).

this is an incomplete list of parameters and conversion factors:

98,128,228 * 1000 = 200,2,61 (rain)
98,128,144 * 1000 = 200,2,78 + 200,2,79 (snow)
98,128,186 *  100 = 200,2,73 (low cloud cover)
98,128,164 *  100 = 200,2,71 (total cloud cover)
98,228,24 + 98,128,129 / 9.8 = 200,201,84 (freezing level¹)

¹ The freezing-level computation is a simplification: COSMO-LAMI's 200,201,84 uses the height above sea level and a "-999" when the freezing level is below the orography, while ECMWF's 98,228,24 is the height in metres above the orography, which becomes zero when the freezing level is at or below the orography.
98,128,129 is a geopotential which (I am told), divided by 9.8, gives the orography.
The correct procedure for a possible implementation would be to refer to an additional GRIB file for the orography (--output-format=grib_api:something.grb ?), then add each value of 98,228,24 to the orography at that point and set to -999 all values equal to (or below?) the orography.
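The procedure above can be sketched in a few lines of NumPy. This is an illustrative sketch only, not libsim code: the function name, the sample arrays and the choice of g = 9.8 follow the assumptions stated in this issue.

```python
import numpy as np

def ecmwf_to_cosmo_freezing_level(fl_above_orog, geopotential, g=9.8, missing=-999.0):
    """Convert an ECMWF-style freezing level (height above orography, 0 if
    at/below orography) to a COSMO-style one (height above sea level, -999
    if at/below orography), as described in the issue.

    orography is recovered from the geopotential divided by g."""
    orography = geopotential / g
    height_asl = fl_above_orog + orography
    # where the ECMWF field is 0, the freezing level is at or below the orography
    return np.where(fl_above_orog <= 0.0, missing, height_asl)

fl = np.array([0.0, 500.0, 1200.0])        # m above orography (ECMWF-style)
phi = np.array([9800.0, 9800.0, 19600.0])  # geopotential (m**2/s**2)
print(ecmwf_to_cosmo_freezing_level(fl, phi))  # [-999. 1500. 3200.]
```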

[v7d_transform] comp-start silently modified when data are missing

This could be either a bug or an RTFM matter; I am temporarily labelling it both as question and as bug, since it is blocking some work.

Topic: aggregation of max/min from instantaneous values on forecast data.
To compute the Tmin for the 2017-02-15 00:00 run over a 6-6 interval starting from the instantaneous values, I do something like:

v7d_transform --input-format=dba --time-definition=0 --comp-stat-proc=254:3 --comp-step='1 00' --comp-start="2018-02-15 06:00" --set-network=cosmoi7 --output-format=dba --variable-list=B12101 --comp-frac-valid=0 none/@sqlite:/dev/shm/tmp.sqlite none/@sqlite:/dev/shm/kalman.sqlite

(and it works).

For the Tmax of the same run I would need to start before the actually available data, so as to get the Tmax at +18 (the data from 18 to 00 would be missing, but that is considered acceptable).

I tried:

v7d_transform --input-format=dba --time-definition=0 --comp-stat-proc=254:2 --comp-step='1 00' --comp-start="2018-02-14 18:00" --set-network=cosmoi7 --output-format=dba --variable-list=B12101 --comp-frac-valid=0 none/@sqlite:/dev/shm/tmp.sqlite none/@sqlite:/dev/shm/kalman.sqlite

But it behaves as if I had set comp-start="2018-02-15 00:00" and computes the Tmax over a 0-24 interval. If instead I start from 18:00 of the following day it works (but I am missing a value that I would like to have).
Am I using the wrong option?

(the input db is attached: tmp.zip)

New variable for ECMWF IFS maximum wind gust

With the new dissemination, the old variable 10fg6 (GRIB1,98,128,123) has become three-hourly, changing parameter along the way (to keep things interesting) to 10fg3 (GRIB1,98,228,28).

At the moment it is only possible to compute the maxima with vg6d_transform --comp-stat-proc=2 for the old variable.

grib_dump of the new variable:

grib_dump -w shortName=10fg3 JND02080000020818001
***** FILE: JND02080000020818001
#==============   MESSAGE 134 ( length=58366 )             ==============
GRIB {
  editionNumber = 1;
  table2Version = 228;
  # European Centre for Medium-Range Weather Forecasts (grib1/0.table)
  centre = 98;
  generatingProcessIdentifier = 147;
  # 10FG3 10 metre wind gust in the last 3 hours  (m s**-1)
(grib1/2.98.228.table)
  indicatorOfParameter = 28;
  # Surface  (of the Earth, which includes sea surface)  (grib1/3.table)
  indicatorOfTypeOfLevel = 1;
  level = 0;
  # Product with a valid time ranging between reference time + P1 and
reference time + P2 (grib1/5.table)
  timeRangeIndicator = 2;
  # Unknown code table entry (grib1/0.ecmf.table)
  subCentre = 0;
  paramId = 228028;
  #-READ ONLY- cfNameECMF = unknown;
  #-READ ONLY- cfName = unknown;
  #-READ ONLY- cfVarNameECMF = fg310;
  #-READ ONLY- cfVarName = fg310;
  #-READ ONLY- units = m s**-1;
  #-READ ONLY- nameECMF = 10 metre wind gust in the last 3 hours;
  #-READ ONLY- name = 10 metre wind gust in the last 3 hours;
  decimalScaleFactor = 0;
  dataDate = 20170208;
  dataTime = 0;
  # Hour (stepUnits.table)
  stepUnits = 1;
  stepRange = 15-18;
  startStep = 15;
  endStep = 18;

output-format='BUFR:generic-frag' not supported

Dear all,
I use libsim-6.2.5-1.x86_64

I have just tried the following command
v7d_transform --disable-qc --input-format='BUFR' --output-format='BUFR:generic-frag' oss_nc_2703_2505 out.bfr
and I get the following message:
[stderr] ERROR v7d_transform.dballe_class - requested export template generic-frag which does not exist
[stderr] ERROR v7d_transform.dballe_class -
Fatal error: dballe: requested export template generic-frag which does not exist

According to the man page, this output format is supported:
--output-format=STRING
format of output file, in the form 'name[:template]'; 'native' for vol7d native binary format (no template to be specified); 'BUFR' and 'CREX' for corresponding formats, with template as an alias like
'synop', 'metar', 'temp', 'generic', empty for 'generic', the special value 'generic-frag' is used to generate bufr on file where ana data is reported only once at beginning and data in other bufr after;

pkg-config file

A libsim.pc file would be useful for programs that link against libsim:

$ pkg-config --libs libsim
-lsim_volgrid6d -lsim_vol7d -lsim_termolib -lsim_base -llog4fortran

valgrind checks

At gridinfo_class.F90:630-633 valgrind detects that simpledate may be uninitialized; to be checked.

Computing daily mean temperature from instantaneous values

The attached file t-inst.bufr.tar.gz contains instantaneous temperature data for 2016-11-01 (one measurement per hour, from 01:00 to 23:00).

Although there are 23 instantaneous values, the mean is not computed. If instead the "hole" is at an intermediate time (e.g. at 23:00 rather than at 00:00 of the following day), the mean is computed.

$ export LIBSIM_CLIMAT_BEHAVIOR=1
$ export LOG4C_PRIORITY=DEBUG
$ export LOG4C_APPENDER=stderr
$ v7d_transform \
    --variable-list=B12101 \
    --comp-stat-proc=254:0 \
    --comp-step="01 00" \
    --output-variable-list=B12101 \
    --input-format=BUFR \
    --comp-frac-valid=0 \
    --comp-start="2016-11-01 00:00" \
    --comp-stop="2016-11-02 00:00"  \
    --input-format=BUFR \
    --output-format=BUFR \
    t-inst.bufr t-avg.bufr
[stderr] DEBUG    v7d_transform.vol7d_dballe_class - start import vol7d_dballe
[stderr] DEBUG    v7d_transform.vol7d_dballe_class - start import vol7d_dballe ingest
[stderr] DEBUG    v7d_transform.vol7d_dballe_class - end import vol7d_dballe ingest
[stderr] DEBUG    v7d_transform.vol7d_dballe_class - start import vol7d_dballe dba2v7d
[stderr] DEBUG    v7d_transform.vol7d_dballe_class - end import vol7d_dballe dba2v7d
[stderr] INFO     _default - dballe file /usr/share/wreport/dballe.txt found
[stderr] INFO     _default - dballe file /usr/share/wreport/dballe.txt opened
[stderr] INFO     _default - Found 518 variables in dballe master table
[stderr] DEBUG    v7d_transform.vol7d_dballe_class - end import vol7d_dballe
[stderr] INFO     _default - starting peeling
[stderr] INFO     v7d_transform - Converting input data to real for processing.
[stderr] INFO     _default - computing statistical processing by aggregation 254:0
[stderr] DEBUG    _default - fnregister: adding function object td_t_p2ept ; nout 1
[stderr] DEBUG    _default - fnregister: adding function object t_p2pt ; nout 2
[stderr] DEBUG    _default - fnregister: adding function object dd_ff2u_v ; nout 4
[stderr] DEBUG    _default - fnregister: adding function object u_v2dd_ff ; nout 6
[stderr] DEBUG    _default - fnregister: adding function object r_t2td ; nout 7
[stderr] DEBUG    _default - fnregister: adding function object ff_swd_lwb2pgt ; nout 8
[stderr] DEBUG    _default - fnregister: adding function object q_p_t2r ; nout 9
[stderr] DEBUG    _default - fnregister: adding function object t_td2r ; nout 10
[stderr] DEBUG    _default - fnregister: adding function object swb_alb2swd ; nout 11
[stderr] DEBUG    _default - fnregister: adding function object swd_alb2swb ; nout 12
[stderr] DEBUG    _default - fnregister: adding function object swdir_swdif2swd ; nout 13
[stderr] DEBUG    _default - fnregister: adding function object t_p_w2omega ; nout 14
[stderr] DEBUG    _default - fnregister: adding function object snow_ls_conv2tot ; nout 15
[stderr] DEBUG    _default - fnregister: adding function object rain_snow_conv2tot ; nout 16
[stderr] DEBUG    _default - fnregister: adding function object rain_snow_ls_conv2tot ; nout 17
[stderr] DEBUG    _default - fnregister: adding function object z_fi ; nout 18
[stderr] DEBUG    _default - fnregister: adding function object fi_z ; nout 19
[stderr] DEBUG    _default - fnregister: adding function object z_h ; nout 20
[stderr] DEBUG    _default - fnregister: adding function object h_z ; nout 21
[stderr] DEBUG    _default - fnregister: adding function object r_p_t2q ; nout 22
[stderr] DEBUG    _default - fnregister: adding function object swgwmo2swgsim ; nout 23
[stderr] DEBUG    _default - fnregister: adding function object swgsim2swgwmo ; nout 24
[stderr] DEBUG    _default - fnregister: adding function object copyB12101 ; nout 25
[stderr] INFO     _default - alchemy_v7d: I have:   B12101
[stderr] INFO     _default - alchemy_v7d: To make:  B12101
[stderr] DEBUG    _default - oracle: delete and register
[stderr] DEBUG    _default - oracle: order 0
[stderr] DEBUG    _default - oracle: register  copyB12101
[stderr] DEBUG    _default - fnregister: adding function object copyB12101 ; nout 1
[stderr] DEBUG    _default - oracle: other register  copyB12101
[stderr] DEBUG    _default - fnregister: adding function object copyB12101 ; nout 1
[stderr] INFO     _default - alchemy_v7d: I need 1 variables
$ ls -lah t-avg.bufr
-rw-rw-r-- 1 edg edg 0 Nov  9 16:08 t-avg.bufr

[v7d_transform] ambiguous exit status 1 when result is empty

E.g. v7d_transform exits with status 1 when it reports "Cannot make variable you have requested".

It would be useful to distinguish an error (e.g. file not found) from an empty result.

[stderr] INFO     _default - dballe file /usr/share/wreport/dballe.txt found
[stderr] INFO     _default - dballe file /usr/share/wreport/dballe.txt opened
[stderr] INFO     _default - Found 554 variables in dballe master table
[stderr] INFO     _default - starting peeling
[stderr] INFO     _default - computing statistical processing by aggregation 254:0
[stderr] WARN     _default - vol7d_compute, no timeranges suitable for statistical processing by aggregation
[stderr] INFO     _default - alchemy_v7d: To make:  B10004
[stderr] WARN     _default - alchemy_v7d: I cannot make your request
[stderr] ERROR    v7d_transform - Cannot make variable you have requested
[stderr] ERROR    v7d_transform - use --display to get more information
[stderr] ERROR    v7d_transform - Exit for error

[v7d_transform] error when extracting attributes

I have a dballe SQLite db with the following variables specified as attributes in a particular network: B33201, B33202, B33203, B33204, B33205, B33206, B33207, B33208 (related to the Kalman filter).

In provami-qt everything looks correct; the problem occurs on extraction:

$ dbadb wipe --dsn=sqlite:///dev/shm/out.sqlite
$ export LOG4C_PRIORITY=debug
$ v7d_transform --input-format=dba --output-format=dba --network-list=kalman-cosmoi7 --variable-list="B12101" --attribute-list="B33207,B33208" --start-date="2018-02-14 06:00" --end-date="2018-02-14 06:00" sqlite:/dev/shm/kalman.sqlite sqlite:/dev/shm/out.sqlite
[stderr] DEBUG    v7d_transform.vol7d_dballe_class - start import vol7d_dballe
[stderr] DEBUG    v7d_transform.vol7d_dballe_class - start import vol7d_dballe ingest for constant station data
[stderr] DEBUG    v7d_transform.vol7d_dballe_class - end import vol7d_dballe ingest
[stderr] DEBUG    v7d_transform.vol7d_dballe_class - start import vol7d_dballe dba2v7d
[stderr] DEBUG    v7d_transform.vol7d_dballe_class - end import vol7d_dballe dba2v7d
[stderr] DEBUG    v7d_transform.vol7d_dballe_class - start import vol7d_dballe ingest for station data
[stderr] ERROR    v7d_transform.dballe_class - QC value names must start with '*'
[stderr] ERROR    v7d_transform.dballe_class - 
[stderr] INFO     v7d_transform.dballe_class - 
Fatal error: dballe: QC value names must start with '*'

according to the man page:

if attribute-list is not provided all present attributes in input will be imported

so I tried the same command omitting attribute-list, expecting to find all the attributes in the new db, but (again checking with provami-qt) I lose them all. It seems that libsim does not treat them as attributes (if that makes sense).

The original db is too fat to attach; for the moment it can be found at /autofs/scratch2/dbranchini/kalman.sqlite

[v7d_transform] --network-list option apparently uses only last network

from the manpage:

       -n STRING, --network-list=STRING

              if input-format is of database type, list of station networks to
              be  extracted  in the form of a comma-separated list of alphanu‐
              meric network identifiers [default=]

but if I try:

v7d_transform --input-format=dba --output-format=dba --network-list="kalman-ecmwf,ecmwf" --timerange="3,151200,86400" cippo/lippo@sqlite:/dev/shm/kalman.sqlite cippo/lippo@sqlite:/dev/shm/ecmwf.sqlite

I only obtain the last network (ecmwf in the example; swapping the networks results in kalman-ecmwf).
Changing --output-format=native gives the same result.

getpoint_transform_test.sh fail on fedora 25

make[2]: Entering directory "/root/rpmbuild/BUILD/libsim-6.1.13-1/bin"
make[2]: Nothing to be done for "dba_qcfilter_test.sh".
make[2]: Nothing to be done for "getpoint_transform_test.sh".
make[2]: Leaving directory "/root/rpmbuild/BUILD/libsim-6.1.13-1/bin"
make  check-TESTS
make[2]: Entering directory "/root/rpmbuild/BUILD/libsim-6.1.13-1/bin"
make[3]: Entering directory "/root/rpmbuild/BUILD/libsim-6.1.13-1/bin"
PASS: dba_qcfilter_test.sh
FAIL: getpoint_transform_test.sh
============================================================================
Testsuite summary for libsim 6.1.13
============================================================================
# TOTAL: 2
# PASS:  1
# SKIP:  0
# XFAIL: 0
# FAIL:  1
# XPASS: 0
# ERROR: 0
============================================================================
See bin/test-suite.log
Please report to [email protected]
============================================================================
Makefile:1027: recipe for target "test-suite.log" failed
cat bin/test-suite.log
=======================================
   libsim 6.1.13: bin/test-suite.log
=======================================

# TOTAL: 2
# PASS:  1
# SKIP:  0
# XFAIL: 0
# FAIL:  1
# XPASS: 0
# ERROR: 0

.. contents:: :depth: 2

FAIL: getpoint_transform_test.sh
================================

+ tmpfile=test_3854
+ '[' '' = installed ']'
+ pref=./
+ ./vg6d_getpoint --trans-type=inter --sub-type=near --output-format=native ../data/t_p.grb test_3854.v7d
FAIL getpoint_transform_test.sh (exit status: 1)

Integrate vargrib2bufr

Variables to be added to vargrib2bufr:

vargrib2varbufr: variable 200:2:121:0 not found
vargrib2varbufr: variable 200:201:198:0 not found

hoping they have a definition in the B table

Implement dballe optimisation query=attrs

Since ARPA-SIMC/dballe#117 we can set attrs as a query key to speed up dballe queries that request attributes, likely in vol7d_dballe_class.F03 around line 352:

+IF (c_e(query)) THEN
+! IF (LEN_TRIM(query) > 0) THEN...
+  query = TRIM(query)//',attrs'
+ELSE
+  query = 'attrs'
+ENDIF

and around line 552:

-  myfilter=dbafilter(filter=filter,contextana=.TRUE.,query=cmiss)
+  myfilter=dbafilter(filter=filter,contextana=.TRUE.,query='attrs')

[v7d_transform] comp-start not working

v7d_transform --input-format=BUFR --comp-stat-proc=254:2 --output-variable-list="B12101" --comp-step='1 00' --comp-start="2017-11-20 18:00" --output-format=csv  --comp-frac-valid=0 t2m.bufr tmax.csv

...creates a tmax.csv with no data. Changing the comp-start hour to "00:00" works.

(@dcesari already knows the details and has the needed test data in /tmp/kalman)

Extend variable table with center 80

The BUFR-GRIB variable table file vargrib2bufr.csv should be extended to contain COSMO-specific and some ECMWF-specific GRIB1 entries also for centre 80 (Italy). A (Python?) program could be a good way to do this.

dballe_class: ingest(metaanddatav,filter=filter) with a filter containing multiple attributes returns only the first attribute

call session%ingest(metaanddatav,filter=filter) with a filter containing multiple attributes returns only the first attribute:

allocate (starvars%dcv(4))
allocate (starvars%dcv(1)%dat,source=dbadatab(qcattrvarsbtables(1)))
allocate (starvars%dcv(2)%dat,source=dbadatab(qcattrvarsbtables(2)))
allocate (starvars%dcv(3)%dat,source=dbadatab(qcattrvarsbtables(3)))
allocate (starvars%dcv(4)%dat,source=dbadatab(qcattrvarsbtables(4)))
filter=dbafilter(starvars=starvars)

do while (sessionfrom%messages_read_next())
  call sessionfrom%set(filter=filter)
  call sessionfrom%ingest(metaanddatav,filter=filter)
  do i = 1, size(metaanddatav)
    call metaanddatav(i)%display()
  end do
end do

[vg6d_transform] computing instantaneous maximum and minimum humidity from instantaneous relative humidity for forecast data

a close cousin of #8 (whose closure I incidentally confirm)

the situation is more or less similar, with the relative-humidity GRIB created upstream, again via libsim:
$ vg6d_transform --output-variable-list=B13003 prou.grib urel.grib

except that:

$ vg6d_transform --comp-stat-proc=254:2 --comp-step='1 00' urel.grib umax.grib
[stderr] ERROR    _default - timerange_v7d_to_g1: GRIB2 statisticalprocessing 2 cannot be converted to GRIB1.

I am attaching the GRIB and the log with LOG4C_PRIORITY set to debug (spoiler alert: it does not say much more)

urel.zip

Coding long timeranges

vg6d_transform correctly imports GRIB1 files with instantaneous data and very long timeranges (occupying 2 octets, from climatic datasets), as can be seen using --display, but in the output the timerange information is lost:

vg6d_transform --display longtimerange.grib output.grib
grib_compare longtimerange.grib output.grib

It is probably gridinfo_class' fault.
longtimerange.zip

[Fedora 24] format not a string literal and no format arguments

(which is actually a warning, but since the F24 default is -Werror=format-security it should be fixed)

libtool: compile:  gfortran -DHAVE_CONFIG_H -I. -I.. -I../base -O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic -I/usr/lib64/gfortran/modules -fbackslash -c log4fortran.F90 -o log4fortran.o >/dev/null 2>&1
/bin/sh ../libtool  --tag=CC   --mode=compile gcc -DHAVE_CONFIG_H -I. -I..     -O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic -c -o log4fortran_c.lo log4fortran_c.c
libtool: compile:  gcc -DHAVE_CONFIG_H -I. -I.. -O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic -c log4fortran_c.c  -fPIC -DPIC -o .libs/log4fortran_c.o
log4fortran_c.c: In function 'l4f_category_log_':
log4fortran_c.c:57:57: error: format not a string literal and no format arguments [-Werror=format-security]
   log4c_category_log(cnfCptr(*a_category), *a_priority, ptr_a_format);
                                                         ^~~~~~~~~~~~
log4fortran_c.c: In function 'l4f_log_':
log4fortran_c.c:76:48: error: format not a string literal and no format arguments [-Werror=format-security]
   log4c_category_log(default_cat, *a_priority, ptr_a_format);
                                                ^~~~~~~~~~~~
cc1: some warnings being treated as errors
Makefile:486: recipe for target 'log4fortran_c.lo' failed
make[2]: *** [log4fortran_c.lo] Error 1

Uninitialised POINTERs

In the last commit I also added two pointer initialisations that Fedora 20 used to forgive; now one of them makes make check fail. Please check that they are correct.

example_dballe does not work

./example_dballe
[dummy] ERROR /root/git/libsim/examples/.libs/lt-example_dballe[ 2016 1 29 60 10 48 10 786].dballe_class - option anaflag, dataflag, attrflag defined with filename access

get rid of cnf (long term enhancement)

As discussed in an internal thread on 2018-05-07, the F2003 C interface should sooner or later make cnf (used for log4c/log4fortran) unnecessary.

It's still possible to use --disable-log4c to get rid of the dependency.

Cleanup

When Oracle support is removed (or before, if possible), remember to delete the file bin/db_utils.F90 and avoid the use of parse_dba_access_info, since username and password are not supported and are contained in the database url/filename (dsn = url).

total snowfall in vargrib2bufr.csv

Dear all,
I noticed that it is possible to generate total snowfall (B13205) from grid-scale + convective snowfall if the GRIB files are encoded with centre=98 or centre=200. Would it be possible to also add centre=80, to support COSMO-LEPS products?
Thanks a lot.
Andrea

Output in dballe-compatible csv

Define in v7d_transform a shortcut for creating output in dballe-compatible CSV format; currently it is necessary to type

--output-format=csv --csv-loop=time,timerange,level,ana,network,var \
--csv-column=ana,network,time,level,timerange,var,value

e.g. define --output-format=csvdba

Add computation of frequencies on box

It would be useful to add the computation of the fraction of points above (or below, or within a<x<b) a specified threshold inside a box/polygon, etc., from both gridded and sparse points.
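For the simple rectangular-box case, the requested statistic can be sketched as follows. This is an illustrative sketch only (not libsim code); the function name, box convention and sample points are hypothetical.

```python
import numpy as np

def fraction_in_box(lons, lats, values, box, threshold):
    """Fraction of the points inside a lon/lat box whose value exceeds a threshold.

    box = (lonmin, lonmax, latmin, latmax); works equally for sparse points
    or for flattened grid coordinates. Returns NaN if the box is empty."""
    lonmin, lonmax, latmin, latmax = box
    inside = (lons >= lonmin) & (lons <= lonmax) & (lats >= latmin) & (lats <= latmax)
    n_inside = np.count_nonzero(inside)
    if n_inside == 0:
        return np.nan  # no points fall in the box
    return np.count_nonzero(values[inside] > threshold) / n_inside

# 4 points; 3 fall inside the box, 2 of those exceed the threshold -> 2/3
lons = np.array([10.0, 10.5, 11.0, 20.0])
lats = np.array([44.0, 44.5, 45.0, 44.0])
vals = np.array([12.0, 8.0, 15.0, 99.0])
print(fraction_in_box(lons, lats, vals, (9.0, 12.0, 43.0, 46.0), 10.0))
```

The "below" and "a<x<b" variants only change the comparison on values[inside]; a polygon test would replace the rectangular mask with a point-in-polygon check.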

Missing var B14198 from vargrib2bufr.csv

In commit f50d919 the variable B14198 was removed from vargrib2bufr.csv.

This variable is used to generate the GRIB for the erg5 dataset starting from the agro-area data:

  • Conversion of the data from VM (hourly mean visible radiation, downward component, Oracle id 409) to BUFR (var=B14198,level=1,-,-,-,trange=0,0,3600)
  • Conversion of the BUFR files to GRIB2 with v7d_transform

Can this variable be reintroduced?

(I am assigning the bug to @dcesari even though it really belongs to Enrico Minguzzi, who does not have a GitHub account.)

make check failing on char_utilities_test

At Paolo's request I re-enabled make check in libsim (which is meet and right), but it fails.

this is the output of test-suite.log:

========================================
   libsim 6.1.11: base/test-suite.log
========================================

# TOTAL: 4
# PASS:  3
# SKIP:  0
# XFAIL: 0
# FAIL:  1
# XPASS: 0
# ERROR: 0

.. contents:: :depth: 2

FAIL: char_utilities_test
=========================

 === Testing char_utilities module ===
 Checking int_to_char
 Checking byte_to_char
 Checking real_to_char
 Checking double_to_char
 Checking l_nblnk
 Checking l_nblnk degenerated
 Checking f_nblnk
 Checking f_nblnk degenerated
FAIL char_utilities_test (exit status: 1)

vg6d_subarea and UTM

vg6d_subarea lacks the parameters needed for interpolation in UTM, despite having the capability.

v7d_transform segfaults importing TEMP

The command

v7d_transform --variable-list=B10004,B12101,B12103 --input-format=BUFR --output-format=csv \
 temp.bufr o.csv

crashes with a segmentation fault in

#0  0x00007ff9f87649ed in vol7d_dballe_class::dba2v7d (this=..., warning: Range for type <error type> has invalid bounds 1..-6908486503888838654
metaanddatav=warning: Range for type <error type> has invalid bounds 1..-6908486503888838654

..., time_definition=<optimized out>, set_network=...)
    at vol7d_dballe_class.F03:798

while without --variable-list=B10004,B12101,B12103 it works as expected.
temp.zip

[vg6d_transform] computation of maximum and minimum temperatures from 2 m temperature for forecast data

Currently it is possible to aggregate tmax and tmin that have already been computed, e.g.:
vg6d_transform --comp-stat-proc=[2|3] --comp-step='1 00' --comp-full-steps in.grib out.grib

It would be interesting to implement the computation directly from t2m for the models that do not output that kind of variable (e.g. COSMO 2.8).
Upstream, however, there is a specification issue for the computation of tmin: unlike the 0-24 window used for tmax, I have seen 9-9, 18-06 and 12-12 implementations...
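The windowing question above can be illustrated with a small sketch that extracts an extreme of hourly t2m over a configurable hour window, including windows that cross midnight such as 18-06 (hypothetical names, not the vg6d_transform implementation):

```python
import numpy as np

def window_extreme(hours, t2m, start, end, func=np.max):
    """Extreme of t2m over the hour window [start, end);
    a window with start >= end wraps around midnight (e.g. 18-06)."""
    hours = np.asarray(hours) % 24
    t2m = np.asarray(t2m)
    if start < end:
        sel = (hours >= start) & (hours < end)
    else:  # window crossing midnight
        sel = (hours >= start) | (hours < end)
    return func(t2m[sel])

hours = [17, 18, 19, 23, 0, 3, 6, 12]
t2m   = [ 5,  4,  3,  1, 0, 2, 6, 10]
print(window_extreme(hours, t2m, 18, 6, np.min))  # 0 (tmin over the 18-06 window)
```

Whichever window convention is chosen (0-24, 9-9, 18-06, 12-12), the wrap-around case is the only extra complication with respect to the plain daily aggregation.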

Optimize grid-to-sparse-points metamorphosis transform

The grid-to-sparse-points metamorphosis transformation (e.g. in the vg6d_getpoint command) takes a superlinear amount of time checking the geographical uniqueness of the points; this check should be relaxed or optimised when the sparse data come from a regular grid, as in this case.
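One way to make the uniqueness check linear is to hash coordinates rounded to a tolerance into a set, instead of comparing every pair of points; a sketch under that assumption (names are hypothetical):

```python
def unique_points(lons, lats, tol=1e-6):
    """Return the indices of geographically unique points in O(n)
    by hashing coordinates rounded to a tolerance, instead of the
    O(n^2) pairwise comparison."""
    seen = set()
    keep = []
    for i, (lon, lat) in enumerate(zip(lons, lats)):
        key = (round(lon / tol), round(lat / tol))
        if key not in seen:
            seen.add(key)
            keep.append(i)
    return keep

print(unique_points([11.34, 11.34, 12.0], [44.5, 44.5, 44.5]))  # [0, 2]
```

When the points are known to come from a regular grid, as in the case described above, the check could also simply be skipped.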

Error in BUFR export

Importing and then exporting a BUFR file with v7d_transform, I get a dballe error on export. The import looks correct, judging from --display.

With dballe-7.16-1 the error is:

Fatal error: dballe: no variables found in input record

With dballe-7.11-1, a more talkative one:

Fatal error: dballe: cannot insert attributes for variable 000000: the last prendilo inserted 2 variables, and *var_related is not set

The command is:

v7d_transform --input-format=BUFR --output-format=BUFR prova20.bufr out.bufr

The same error occurs when exporting to dba; exporting to the csv or native format works.
Attached prova20.bufr.gz, an ECMWF SYNOP.

Dballe tests fail

dballe_test, dballe_test2, dballe_test3 and dballe_test.sh fail, even when checking out older versions (6.2.4); is this a new dballe incompatibility?

[stderr] ERROR    dballe_test.dballe_class - cannot compile query 'SELECT id, memo, description, prio, descriptor, tablea FROM repinfo ORDER BY i
[stderr] ERROR    dballe_test.dballe_class - 
Fatal error: dballe: cannot compile query 'SELECT id, memo, description, prio, descriptor, tablea FROM repinfo ORDER BY id':no such table: repinfo
FAIL dballe_test (exit status: 1)

Error message not helpful

vg6d_getpoint --trans-type=inter --sub-type=linear --lon=11.34000 --lat=44.50000 \
    input.grib output.bufr

gives the following uninformative and misleading error message

[stderr] ERROR    getpoint.grid_transform_class.transform - horizontal interpolation
[stderr] ERROR    getpoint.grid_transform_class.transform - inconsistent input shape: 2147483647,2147483647 /= 297,313

(the correct command line would use --sub-type=bilin instead of --sub-type=linear)

Error in a program that reads raingauge data and attributes from dballe and writes them to a text file

In the module preci_anagrafica_stazioni_class.f90, a segmentation fault occurs when trying to assign the index of the station-code attribute to the variable indstazid.
The programs and the data are attached in set_prove.tar.gz.

set_prove.tar.gz

The error:

Program received signal SIGSEGV, Segmentation fault.
__memcmp_ssse3 () at ../sysdeps/x86_64/multiarch/memcmp-ssse3.S:1862
1862 movl -10(%rdi), %eax
Missing separate debuginfos, use: dnf debuginfo-install SuperLU-5.2.0-1.fc24.x86_64 armadillo-7.800.2-1.fc24.x86_64 arpack-3.3.0-2.b0f7a60git.fc24.x86_64 atlas-3.10.2-12.fc24.x86_64 blas-3.6.1-1.fc24.x86_64 bzip2-libs-1.0.6-21.fc24.x86_64 cfitsio-3.370-6.fc24.x86_64 expat-2.1.1-2.fc24.x86_64 fontconfig-2.11.94-7.fc24.x86_64 fortrangis-2.6-1.x86_64 freetype-2.6.3-3.fc24.x86_64 freexl-1.0.2-3.fc24.x86_64 gdal-libs-2.0.2-6.fc24.x86_64 giflib-4.1.6-15.fc24.x86_64 jasper-libs-1.900.13-3.fc24.x86_64 jbigkit-libs-2.1-5.fc24.x86_64 lapack-3.6.1-1.fc24.x86_64 lcms2-2.8-2.fc24.x86_64 libICE-1.0.9-8.fc24.x86_64 libSM-1.2.2-4.fc24.x86_64 libX11-1.6.3-3.fc24.x86_64 libXau-1.0.8-6.fc24.x86_64 libaio-0.3.110-6.fc24.x86_64 libbsd-0.8.3-1.fc24.x86_64 libdap-3.17.2-1.fc24.x86_64 libgeotiff-1.4.0-7.fc24.x86_64 libgta-1.0.7-3.fc24.x86_64 libsim-6.1.15-1.x86_64 libspatialite-4.3.0a-2.fc24.x86_64 libtiff-4.0.7-2.fc24.x86_64 libtool-ltdl-2.4.6-13.fc24.x86_64 libuuid-2.28.2-2.fc24.x86_64 libwebp-0.5.2-1.fc24.x86_64 libxcb-1.11.1-2.fc24.x86_64 libxml2-2.9.3-3.fc24.x86_64 log4c-1.2.4-8.fc24.x86_64 lua-libs-5.3.4-1.fc24.x86_64 mariadb-libs-10.1.21-3.fc24.x86_64 netcdf-fortran-4.4.3-2.fc24.x86_64 nss-3.29.3-1.0.fc24.x86_64 nss-softokn-freebl-3.29.3-1.0.fc24.x86_64 nss-util-3.29.3-1.0.fc24.x86_64 ogdi-3.2.0-0.26.beta2.fc24.x86_64 openblas-openmp-0.2.19-4.fc24.x86_64 openjpeg2-2.1.2-3.fc24.x86_64 openssl-libs-1.0.2k-1.fc24.x86_64 pcre-8.40-5.fc24.x86_64 poppler-0.41.0-3.fc24.x86_64 proj-4.9.2-2.fc24.x86_64 shapelib-1.3.0f-9.fc24.x86_64 unixODBC-2.3.4-2.fc24.x86_64 xerces-c-3.1.4-1.fc24.x86_64 xz-libs-5.2.2-2.fc24.x86_64
(gdb) where
#0 __memcmp_ssse3 () at ../sysdeps/x86_64/multiarch/memcmp-ssse3.S:1862
#1 0x00007ffff7a6a99e in __vol7d_var_class_MOD_vol7d_var_eq () from /lib64/libsim_vol7d.so.6
#2 0x00007ffff7a6acfa in __vol7d_var_class_MOD_index_var () from /lib64/libsim_vol7d.so.6
#3 0x000000000040a04e in preci_anagrafica_stazioni_class::scrivi_preci_stazioni_tab_txt (
this=<error reading variable: value requires 490227048 bytes, which is more than max-value-size>, giornaliere=.FALSE.)
at preci_anagrafica_stazioni_class.f90:351
#4 0x00000000004030f3 in leggi_preci_da_oracle_e_scrivi_file () at leggi_preci_da_dballe_e_scrivi_file.f90:186
#5 0x00000000004020bd in main (argc=argc@entry=1, argv=0x7fffffffe260) at leggi_preci_da_dballe_e_scrivi_file.f90:18
#6 0x00007ffff6567731 in __libc_start_main (main=0x4020a0 <main>, argc=1, argv=0x7fffffffdf38,
   init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fffffffdf28)
   at ../csu/libc-start.c:289
#7 0x00000000004020f9 in _start ()
