rhoneyager / libicedb
A toolbox for snow particle modeling!
Home Page: https://rhoneyager.github.io/libicedb
License: Other
Include the changes/modifications discussed at the last scattering meeting (17 May 2018). Everyone, please revise the spreadsheet and comment!
I admit I am not very familiar with HDF5, but from standard netCDF I am used to adding a variable description ("long_name"), units, and also comments. We could just use what we worked out in the Excel data structure table. I remember that most people already agreed on it, didn't they? At the very least we should give some examples of how people can add this information.
But maybe this is already planned and you just wanted to start with the basic structure. I just wanted to show that I am actively looking into the files ;-)
There seems to be a restriction on which filename extensions are accepted (.dat, .shape).
My shape files use ".adda", so I am not able to import them without changing the extension. Is there a reason why only specific extensions are allowed?
Edit: ADDA output files use ".geom", which does not seem to be supported either. I think that extension, at least, should be supported.
At the moment the code only accepts non-negative integers as dipole coordinates (uint64_t). When parsing shape files, if a negative coordinate -x is encountered, it is automatically transformed to x.
I have a lot of shape files centered on the origin, which therefore include negative coordinates.
Are there good reasons not to use signed integers for dipole coordinates?
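To illustrate the point, here is a minimal sketch of a coordinate parser that keeps negative values by treating coordinates as signed integers. The function name and the whitespace-separated "x y z" line format are illustrative assumptions, not the actual libicedb API:

```python
# Sketch: parse dipole coordinates as signed integers (int64-style) rather
# than unsigned, so shapes centered on the origin survive a round trip.
# parse_dipole_line is a hypothetical helper, not a libicedb function.

def parse_dipole_line(line):
    """Parse one 'x y z' line into a tuple of signed ints."""
    x, y, z = (int(tok) for tok in line.split()[:3])
    return (x, y, z)  # negative values are preserved, not mapped to abs()

print(parse_dipole_line("-3 0 7"))  # (-3, 0, 7)
```

With unsigned storage, the same line would silently become (3, 0, 7), which is exactly the transformation described above.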
@DaveOri, @stefankneifel, @eec3
For the next meeting (Thursday), let's upload example files for each of our shape models to the FTP site.
The scattering element number is always zero.
Shouldn't it be a progressive number?
We are missing the very important attribute particle_scattering_element_spacing (inter-dipole or inter-sphere spacing). At least I couldn't find it in the GMM sample files Eugene sent around.
I just discussed with Davide that it might be useful to generate an index file for each dataset containing the basic information about what the dataset includes (frequency, size, temperature range, list of particle_ids, etc.). Such a file would make it unnecessary to read the entire HDF5 files when you are only looking for a certain subset of the data (e.g. particles from 5-6 mm).
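As a rough sketch of the idea, the index could be a small JSON summary written next to the HDF5 files. All field names below ("max_dimension_mm", "frequency_ghz", etc.) are illustrative assumptions about the metadata, not an agreed format:

```python
# Sketch: build a JSON index summarizing a dataset, assuming the per-particle
# metadata has already been extracted from the HDF5 files.
import json

def build_index(datasets):
    """datasets: list of dicts with (assumed) keys 'particle_id',
    'max_dimension_mm', 'frequency_ghz', 'temperature_k'."""
    return {
        "particle_ids": sorted(d["particle_id"] for d in datasets),
        "size_range_mm": [min(d["max_dimension_mm"] for d in datasets),
                          max(d["max_dimension_mm"] for d in datasets)],
        "frequencies_ghz": sorted({d["frequency_ghz"] for d in datasets}),
        "temperatures_k": sorted({d["temperature_k"] for d in datasets}),
    }

index = build_index([
    {"particle_id": "agg_0001", "max_dimension_mm": 5.2,
     "frequency_ghz": 94.0, "temperature_k": 253.0},
    {"particle_id": "agg_0002", "max_dimension_mm": 5.8,
     "frequency_ghz": 35.6, "temperature_k": 253.0},
])
print(json.dumps(index, indent=2))
```

A user looking for particles from 5-6 mm would then only open the HDF5 files whose entries fall in that size range.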
Stefan
See the PSU plugin for why this is happening.
particle_scattering_element_coordinates should also have a dimension particle_constituent_number, right? In most cases this will be = 1 (only ice), but shouldn't the dimension show up in the variable when I look at it in ncdump?
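For concreteness, a hedged CDL sketch of how the structure might look in ncdump if the constituent dimension were made explicit; the dimension sizes and the `particle_axis` name are illustrative assumptions, not the agreed layout:

```
dimensions:
        particle_scattering_element_number = 1000 ;
        particle_constituent_number = 1 ;
        particle_axis = 3 ;
variables:
        float particle_scattering_element_coordinates(particle_scattering_element_number, particle_axis) ;
        float particle_scattering_element_composition(particle_scattering_element_number, particle_constituent_number) ;
```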
I tend to include headers in my shape files. ADDA uses # for single-line comments.
However, this seems to create some issues when importing such a shape file. I have tried some other common characters, but there seems to be no support for single-line comments at all.
For the heck of it, I tried importing a DDSCAT file as well, and that seemed to work fine.
I believe some form of line comment should be allowed in shape files, though I'm not sure exactly which characters should be supported. Supporting multiple characters, e.g. # and %, should be possible without conflicts.
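A minimal sketch of a tolerant reader that skips blank lines and full-line comments beginning with # or % (the characters suggested above); this is an illustrative parser, not the libicedb implementation:

```python
# Sketch: skip blank lines and single-line comments when reading a shape file.
# COMMENT_CHARS is an assumption about which markers would be supported.
COMMENT_CHARS = ("#", "%")

def read_shape_lines(text):
    """Yield data lines, dropping comments and surrounding whitespace."""
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith(COMMENT_CHARS):
            continue  # blank line or full-line comment
        yield line

sample = """# ADDA-style header comment
% alternative comment style
0 0 0
1 0 0
"""
print(list(read_shape_lines(sample)))  # ['0 0 0', '1 0 0']
```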
particle_id: Are we still aiming to use unique particle identifiers in the end? We could also add a second variable, "particle_id_internal", containing the name used by the developers, while particle_id itself is the global, unique identifier and the only reliable tag attached to this structure.
I would also suggest already including the metadata in each file, as we discussed and listed in the Excel sheet.
The shape reading code needs automated testing programs. A set of valid shape files should be provided, and the results of parsing these shapes should be checked to ensure that they are read correctly.
Needed for better user error messages.
I don't really understand why we put the constituent name into a single attribute (particle_single_constituent_name) rather than a variable, as in the Excel table: particle_constituent_name(particle_constituent_number). Again, for just one constituent it doesn't really matter, but if I have 2 or 3 constituents I have to generate more and more new attributes instead of having one variable that contains them all.
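A hedged CDL sketch of the variable-based alternative, following the naming quoted from the Excel table; the string-length dimension and its size are illustrative assumptions:

```
dimensions:
        particle_constituent_number = 2 ;
        constituent_name_length = 16 ;
variables:
        char particle_constituent_name(particle_constituent_number, constituent_name_length) ;
```

With this layout, adding a third constituent only grows the first dimension instead of requiring a new attribute per constituent.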
I have started trying to import partially melted snowflakes and I am constantly getting segfaults.
The first memory corruption I have identified is in the algorithm that parses the constituents: it considers the vector of particle constituents to have length Ndipoles*Nmaterials instead of Ndipoles. I have started implementing a solution in the branch https://github.com/rhoneyager/libicedb/tree/heterogeneous_particles, but I hesitate to open a pull request because:
Thanks to the answers to #15, I know the second point has some HDF I/O issues as well. We might want to fix those before making further progress.
I have tried to send two emails to the mailing list today, but they have not been sent to the group.
I have realized that my problems with importing shape files were due to an old version of the DDSCAT shape-file format I was using, which requires a different number of header lines before the coordinates start.
The ADDA scattering code is able to ingest various shape-file formats transparently.
It might be good to have the same feature here.