unidata / netcdf-java
The Unidata netcdf-java library
Home Page: https://docs.unidata.ucar.edu/netcdf-java/current/userguide/index.html
License: BSD 3-Clause "New" or "Revised" License
The following line in H5header has been commented out (without further explanation):
if (dt.isEnum()) {
  Group ncGroup = v.getParentGroup();
  EnumTypedef enumTypedef = ncGroup.findEnumeration(mdt.enumTypeName);
  if (enumTypedef == null) { // if shared object, won't have a name; shared version gets added later
    enumTypedef = new EnumTypedef(mdt.enumTypeName, mdt.map);
    // LOOK ncGroup.addEnumeration(enumTypedef);
  }
  v.setEnumTypedef(enumTypedef);
}
This means that the typedef is not added to the group.
This has been present at least since the thredds repo's 4.x branch.
Will try to fix it in this repo, on branch 5.3.
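For reference, the apparent intent (a hedged sketch that simply re-enables the commented-out LOOK line; I have not verified side effects on the shared-object case) would be:

```java
EnumTypedef enumTypedef = ncGroup.findEnumeration(mdt.enumTypeName);
if (enumTypedef == null) {
  enumTypedef = new EnumTypedef(mdt.enumTypeName, mdt.map);
  ncGroup.addEnumeration(enumTypedef); // register the typedef so the group knows about it
}
v.setEnumTypedef(enumTypedef);
```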
PR Unidata/netcdf-java#57 added a new test category: ucar.unidata.util.test.category.Slow. Currently, tests annotated with that category are always ignored. At a minimum, we should still run these tests on Jenkins. The bigger question, however, is whether we want to be able to turn these on locally with ease (say, with a java option?). More info at #57 (comment).
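One way to enable them locally (a minimal sketch, assuming JUnit 4; the property name tests.slow is made up here, not an existing option) is to gate the slow tests on a system property:

```java
import org.junit.Assume;
import org.junit.Before;
import org.junit.Test;

public class SlowTestExample {
  @Before
  public void assumeSlowTestsEnabled() {
    // Skips (rather than fails) unless the JVM was started with -Dtests.slow=true.
    Assume.assumeTrue("Slow tests disabled; enable with -Dtests.slow=true",
        Boolean.getBoolean("tests.slow"));
  }

  @Test
  public void mySlowTest() {
    // long-running work would go here
  }
}
```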
@lesserwhirls, I'm trying to determine if an issue related to loading remote GRIB files was ever addressed. Specifically, I'm looking at Unidata/thredds#797.
I had a user write in today about trying to load remote GRIB-2 files, and I found it was reporting the severe "reading/Creating gbx9 index for file" exception in Grib2CollectionBuilder.
I had a dim recollection that I'd encountered trouble in the past with loading remote GRIB-2 files, and der Google led me right to Unidata/thredds#797
Although my prior issue was using NJ 4.6, I used the latest 5.3 SNAPSHOT today. @cofinoa commented previously that the problem did not occur with NJ 4.2 but that things got broken in 4.3.
Confusingly, there were also some GRIB-2 files in the same remote directory that Grib2Iosp wouldn't claim, and so they were instead reported as "not a valid CDM file".
http://docs.geotools.org/latest/userguide/library/coverage/jp2k.html implies that there's something in ImageIO. However, ImageIO may use native code.
https://kakadusoftware.com/ is a commercial library.
Seems unlikely, but I'm leaving it here for future reference.
Upgrade the project to use the latest version of Gradle (currently using v3.5.1; latest is v5.6.2). This will allow us to at least attempt to build and test the project with Java 11 (while still using only Java 8 features).
We've been seeing a few situations on Travis (with both Oracle and AdoptOpenJDK JDKs) that smell an awful lot like a race condition somewhere related to caching. What happens is that the Travis build times out with the following output:
ucar.nc2.util.cache.TestFileCacheConcurrent > testConcurrentAccess STANDARD_OUT
TestFileCacheConcurrent
loaded 65 files
submit 100 queue size 50 cache: { hits= 0 miss= 33 nfiles= 31 elems= 16
}
done 100
submit 200 queue size 53 cache: { hits= 20 miss= 127 nfiles= 127 elems= 40
}
done 200
submit 300 queue size 54 cache: { hits= 80 miss= 166 nfiles= 50 elems= 19
}
done 300
submit 400 queue size 44 cache: { hits= 113 miss= 243 nfiles= 126 elems= 38
}
done 400
submit 500 queue size 47 cache: { hits= 160 miss= 293 nfiles= 50 elems= 21
}
done 500
submit 600 queue size 48 cache: { hits= 190 miss= 362 nfiles= 119 elems= 41
}
done 600
submit 700 queue size 49 cache: { hits= 233 miss= 418 nfiles= 50 elems= 19
}
done 700
submit 800 queue size 49 cache: { hits= 250 miss= 500 nfiles= 132 elems= 39
}
done 800
submit 900 queue size 49 cache: { hits= 290 miss= 561 nfiles= 70 elems= 24
}
done 900
submit 1000 queue size 15 cache: { hits= 334 miss= 651 nfiles= 158 elems= 47
}
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
ucar.nc2.util.cache.TestNetcdfFileCache STANDARD_OUT
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
InterruptedException=sleep interrupted
ucar.nc2.util.cache.TestNetcdfFileCache > testPeriodicClear STANDARD_OUT
InterruptedException=sleep interrupted
Unfortunately, it's not easily reproducible. It reminds me a lot of what we were seeing on Jenkins, which had the exact same symptoms and output as the ucar.nc2.util.cache.TestNetcdfFileCache > testPeriodicClear test (but not the others shown above... at least that I can remember), and which stopped happening after PR #61.
Version 5.2.0 with openjdk version "1.8.0_212" on Ubuntu 16.04.
When opening the above OPeNDAP URL with 5.2, findDatasetUrl fails with a 404 Not Found.
When opening the same URL with 4.6.14, the code goes straight to:
NetcdfDataset ncd = NetcdfDataset.acquireDataset(location, task);
and the dataset opens just fine.
NEXRAD will be beta testing some Message 31 adjustments this spring as part of their normal beta testing of RPG/RDA build 19.0. These data are currently available from the FOP1 testbed, and I've attached a sample file that contains the format adjustments. This file does not completely parse with the current code:
and
netcdf-java/cdm/radial/src/main/java/ucar/nc2/iosp/nexrad2/Level2Record.java (lines 1117 to 1138 in 63a2809)
Really, this code should not be hard-coded based on the moment; instead it should look at the data word size (8 or 16 bits) encoded in the data file.
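A sketch of what moment-independent decoding could look like (names and plumbing are hypothetical, not the actual Level2Record layout; the (raw - offset) / scale convention follows the Message 31 ICD as I understand it):

```java
import java.nio.ByteBuffer;

class SampleDecoder {
  // Decode one sample using the word size recorded in the data block header,
  // instead of switching on the moment name.
  static float decodeSample(ByteBuffer data, int wordSizeBits, float scale, float offset) {
    int raw = (wordSizeBits == 16) ? (data.getShort() & 0xFFFF) : (data.get() & 0xFF);
    return (raw - offset) / scale; // physical value per the scale/offset convention
  }
}
```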
netcdf-java/cdm/radial/src/main/java/ucar/nc2/iosp/nexrad2/Level2Record.java (lines 589 to 829 in 63a2809)
I...uh...don't know where to begin.
We could probably live without CFP (well, probably forever). The change to ZDR means the data are incorrect for any site shipping the new version. This is why we use what's in the file and don't hard-code decisions unless it's absolutely necessary, boys and girls.
I am trying to finish packaging up a copy of Panoply for macOS using netCDF-Java 5.3 SNAPSHOT. Everything is great right up until almost the end when I have to get the disk image DMG notarized by Apple. That fails because of the presence of libjnidispatch.jnilib in the NJ jar, i.e., netcdfAll-5.3.0-SNAPSHOT.jar/com/sun/jna/darwin/libjnidispatch.jnilib. In short, the notarization process is complaining that the libjnidispatch.jnilib binary is not properly signed.
Note that this problem did not occur last Friday (January 31) when I was making a Mac package, but I have a dim recollection that Apple was going to be closing some loophole for notarizing Java-based apps effective February 1. Or perhaps that was apps built on an old Java, such as the Java 8 that NJ is based on.
I realize that dealing with this is pretty much outside Unidata's purview, but it's something that could completely break my ability to distribute a netCDF-Java based app to macOS users with an up-to-date operating system. But I am wondering what libjnidispatch.jnilib is used for in NJ, and whether it's really necessary?
ETA: I see that JNI is necessary in order to use the library to write NC4 files. A quick and dirty test suggests that removing the offending jnilib from the NJ jar does not cause any breakage when opening and reading datasets, but I will need to test that more thoroughly before relying on it.
I'm not sure how important it is, but if you look at getUnitsString for a RadialVariable, the implementation is:
That's...less than helpful. I know at least NEXRAD puts useful information in the units attribute. Any reason not to forward on to that here?
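Something like the following seems like it would do (a hedged sketch of the suggested behavior, not the actual RadialVariable code):

```java
import ucar.nc2.Attribute;
import ucar.nc2.Variable;

class UnitsHelper {
  // Forward to the variable's "units" attribute instead of a hard-coded placeholder.
  static String unitsFor(Variable v) {
    Attribute att = v.findAttribute("units");
    return (att != null && att.isString()) ? att.getStringValue() : "";
  }
}
```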
A user reported that Panoply could not open an HDF-EOS file and there was a "Conflicting Dimensions" exception message. The same occurs using toolsUI and IDV. Although the copy of Panoply was using netCDF-Java 5.1, I encountered the same problem when investigating using 5.3-SNAPSHOT.
The exception is thrown at line 262 of ucar.nc2.iosp.hdf4.HdfEos. After adding a lot of additional logging messages, I eventually found that what was going on is that this particular swath file has metadata stating that the time dimension nTime has length 1, like so:
GROUP=SWATH_1
  SwathName="MOP02"
  GROUP=Dimension
    OBJECT=Dimension_1
      DimensionName="Unlim"
      Size=-1
    END_OBJECT=Dimension_1
    OBJECT=Dimension_2
      DimensionName="nTime"
      Size=1
    END_OBJECT=Dimension_2
However, the time dimension actually has 221967 steps, and the various variables that use that dimension specify a MaxdimList of unlimited, e.g.,
OBJECT=DataField_2
  DataFieldName="SolarZenithAngle"
  DataType=H5T_NATIVE_FLOAT
  DimList=("nTime")
  MaxdimList=("Unlim")
END_OBJECT=DataField_2
So it seems that on trying to acquire the dataset, the netCDF-Java library is figuring out that there are 221967 timesteps in the example file, but when it circles around and does some testing to verify that this is an HDF-EOS file, it's running into that bit of metadata that says the nTime dimension has size 1. And thus the conflicting dimensions exception gets thrown.
I commented out that exception throw and was then able to open the file and plot the data within it without problem. However, I don't know that that's the best solution for this problem. 🙄
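A gentler alternative to removing the throw might look like this (purely a sketch; the names are illustrative and this is not the actual HdfEos code):

```java
class DimResolver {
  // When the swath metadata disagrees with the file, prefer the file's length
  // if the dimension is declared unlimited (Size=-1 / MaxdimList="Unlim").
  static int resolveDimLength(String name, int metaSize, int actualSize, boolean unlimited) {
    if (metaSize == actualSize) return metaSize;
    if (unlimited || metaSize <= 1) return actualSize; // trust the data over the metadata
    throw new IllegalStateException("Conflicting Dimensions for " + name
        + ": metadata says " + metaSize + ", file has " + actualSize);
  }
}
```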
For an example dataset, I ran my tests using ftp://l5ftl01.larc.nasa.gov/pub/MOPITT/MOP02J.008/2017.08.17/MOP02J-20170807-L2V18.0.3.he5
From the mailing list:
When using cdm-core-5.2.0, attempting to access a netCDF resource on a server that serves https on port 443 but does not serve http on port 80 fails, with a message indicating it tried port 80.
See redacted stack trace below.
I wonder if it is related to the diff at v5.1.0...v5.2.0#diff-89b73930910f1f42a6af87d6d282a372 but this is speculation.
Failed to open netCDF file https://fqdn-of-server/path/to/netcdf_resource.nc
...
Caused by: ucar.httpservices.HTTPException: org.apache.http.conn.HttpHostConnectException: Connect to fqdn-of-server:80 [fqdn-of-server/x.x.x.x] failed: Connection timed out: connect
at ucar.httpservices.HTTPMethod.executeRaw(HTTPMethod.java:373)
at ucar.httpservices.HTTPMethod.execute(HTTPMethod.java:314)
at ucar.unidata.io.http.HTTPRandomAccessFile.doConnect(HTTPRandomAccessFile.java:136)
at ucar.unidata.io.http.HTTPRandomAccessFile.<init>(HTTPRandomAccessFile.java:60)
at ucar.unidata.io.http.HTTPRandomAccessFile.<init>(HTTPRandomAccessFile.java:40)
at ucar.nc2.NetcdfFile.getRaf(NetcdfFile.java:448)
at ucar.nc2.NetcdfFile.open(NetcdfFile.java:338)
at ucar.nc2.NetcdfFile.open(NetcdfFile.java:305)
at ucar.nc2.NetcdfFile.open(NetcdfFile.java:290)
at ucar.nc2.NetcdfFile.open(NetcdfFile.java:278)
at wres.io.reading.nwm.NWMTimeSeries.openFile(NWMTimeSeries.java:194)
... 52 more
Caused by: org.apache.http.conn.HttpHostConnectException: Connect to fqdn-of-server:80 [fqdn-of-server/x.x.x.x] failed: Connection timed out: connect
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:374)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:72)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
at ucar.httpservices.HTTPMethod.executeRaw(HTTPMethod.java:366)
... 62 more
Caused by: java.net.ConnectException: Connection timed out: connect
at java.base/java.net.PlainSocketImpl.waitForConnect(Native Method)
at java.base/java.net.PlainSocketImpl.socketConnect(PlainSocketImpl.java:107)
at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)
at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)
at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224)
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:403)
at java.base/java.net.Socket.connect(Socket.java:591)
at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:75)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142)
... 72 more
I don't think this is actually a problem with the NetCDF Java code, but wanted to get some clarification before I ask others to update the ACDD and NODC netCDF feature template documentation.
There seems to be some incorrect info on those pages about the cdm_data_type global attribute. As I understand it, this attribute is specific to NetCDF Java and is used as an explicit way to designate the intended FeatureType.
Specifically, the problem applies to timeSeriesProfile (although I haven't experimented with the trajectory types). According to the CF class, the appropriate cdm_data_type (FeatureType) for timeSeriesProfile is STATION_PROFILE. However, the ACDD and NODC documents and example data sets don't seem to be aware of this cdm_data_type, suggesting STATION instead.
This causes problems with NetCDF Java. FeatureTypes/PointFeature seems to work fine with timeSeriesProfile data sets using cdm_data_type STATION, but FeatureTypes/FeatureScan reports an error:
Table TopScalars/PsuedoStructure(time)/MultidimPseudo(time,z) featureType STATION_PROFILE doesnt match desired type STATION
**Failed to find FeatureDatasetFactory for= /media/store/dl/problem.nc datatype=STATION
Language from the NODC template page:
These data types do not map equally to the CF feature types. If the CF feature type = Trajectory Time Series, use "Trajectory"; if Point, Profile, or Time Series Profile, use "Station".
The example NODC timeSeriesProfile CDL uses cdm_data_type STATION and produces the above error.
The ACDD wiki page lists an incorrect set of possible values for cdm_data_type:
Current values: vector, grid, textTable, tin, stereoModel, video.
It also links to [the THREDDS InvCatalogSpec page](http://www.unidata.ucar.edu/software/thredds/current/tds/catalog/InvCatalogSpec.html#dataType), which lists an incomplete set of possible data type values and doesn't include STATION_PROFILE.
I'm guessing this is just a documentation problem. If so, it seems like three changes need to happen:
Does that sound right? Based on CF.FeatureType, these seem to be the correct mappings:
| CF                | NetCDF Java     |
|-------------------|-----------------|
| point             | POINT           |
| profile           | PROFILE         |
| timeSeries        | STATION         |
| timeSeriesProfile | STATION_PROFILE |
| trajectory        | TRAJECTORY      |
| trajectoryProfile | SECTION         |
For context, I found these problems after following the NODC template guidelines in the netCDF encoder I wrote for the IOOS 52n SOS project. I have now switched my code to [get the cdm_data_type directly from CF.FeatureType.convert](https://github.com/ioos/i52n-sos/blob/master/coding-ioos-netcdf/src/main/java/org/n52/sos/encode/AbstractIoosNetcdfEncoder.java#L241), but want to get the documentation in the wild corrected.
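For reference, the lookup I switched to is roughly this (a sketch; CF.FeatureType.convert is the method used in the linked encoder code, but the exact signature and enum constant names may differ across netCDF-Java versions):

```java
import ucar.nc2.constants.CF;
import ucar.nc2.constants.FeatureType;

class CdmDataTypeLookup {
  // Derive the cdm_data_type from the CF featureType attribute value rather
  // than hard-coding it per the (incorrect) template docs.
  static FeatureType cdmDataTypeFor(String cfFeatureType) {
    CF.FeatureType cfType = CF.FeatureType.valueOf(cfFeatureType); // e.g. "timeSeriesProfile"
    return CF.FeatureType.convert(cfType); // expected: STATION_PROFILE
  }
}
```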
We have a new failure on Jenkins for dap4.test.TestNc4Iosp.testNc4Iosp. This started showing up when the way we load IOSPs changed with PR 101 (see https://github.com/Unidata/netcdf-java/pull/101/files for the changes).
My suspicion is that the change caused the dap4 library to use the Hdf5Iosp instead of the Nc4Iosp. The code should handle both cases, but I think what we see is a difference in the way the two IOSPs see variable metadata (in this case, something about the way enum is handled). Here is the output from Jenkins:
Netcdf-c library version: 4.6.1 of Mar 30 2018 02:30:19 $
Testcase: /home/ubuntu/jenkins/workspace/netcdf-java/dap4/d4tests/src/test/data/resources/testfiles/test_one_var.nc
Testpath: /home/ubuntu/jenkins/workspace/netcdf-java/dap4/d4tests/src/test/data/resources/testfiles/test_one_var.nc
Baseline: /home/ubuntu/jenkins/workspace/netcdf-java/dap4/d4tests/src/test/data/resources/TestIosp/baseline/test_one_var.nc.nc4
DMR Comparison:
Files are Identical
DATA Comparison:
Files are Identical
Testcase: /home/ubuntu/jenkins/workspace/netcdf-java/dap4/d4tests/src/test/data/resources/testfiles/test_one_vararray.nc
Testpath: /home/ubuntu/jenkins/workspace/netcdf-java/dap4/d4tests/src/test/data/resources/testfiles/test_one_vararray.nc
Baseline: /home/ubuntu/jenkins/workspace/netcdf-java/dap4/d4tests/src/test/data/resources/TestIosp/baseline/test_one_vararray.nc.nc4
DMR Comparison:
Files are Identical
DATA Comparison:
Files are Identical
Testcase: /home/ubuntu/jenkins/workspace/netcdf-java/dap4/d4tests/src/test/data/resources/testfiles/test_atomic_types.nc
Testpath: /home/ubuntu/jenkins/workspace/netcdf-java/dap4/d4tests/src/test/data/resources/testfiles/test_atomic_types.nc
Baseline: /home/ubuntu/jenkins/workspace/netcdf-java/dap4/d4tests/src/test/data/resources/TestIosp/baseline/test_atomic_types.nc.nc4
DMR Comparison:
>>>> 18 CHANGED FROM
enum cloud_class_t primary_cloud;
>>>> CHANGED TO
enum primary_cloud primary_cloud;
>>>> 20 CHANGED FROM
enum cloud_class_t secondary_cloud;
>>>> CHANGED TO
enum secondary_cloud secondary_cloud;
>>>> Dap4 Testing: End of differences.
Look at rewriting FileCache with Guava's LoadingCache.
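A rough sketch of what that could look like (not the actual FileCache API; the capacity and timeout are placeholders):

```java
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.cache.RemovalNotification;
import java.io.IOException;
import java.util.concurrent.TimeUnit;
import ucar.nc2.NetcdfFile;

class GuavaFileCache {
  static final LoadingCache<String, NetcdfFile> CACHE = CacheBuilder.newBuilder()
      .maximumSize(100)                        // placeholder capacity
      .expireAfterAccess(10, TimeUnit.MINUTES) // placeholder timeout
      .removalListener((RemovalNotification<String, NetcdfFile> n) -> {
        try {
          n.getValue().close();                // release the file handle on eviction
        } catch (IOException e) {
          // log and continue
        }
      })
      .build(new CacheLoader<String, NetcdfFile>() {
        @Override
        public NetcdfFile load(String location) throws IOException {
          return NetcdfFile.open(location);
        }
      });
}
```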
error message:
ucar.httpservices.HTTPException: java.net.URISyntaxException: Expected scheme-specific part at index 5: HTTP:
at ucar.httpservices.HTTPAuthUtil.authscopeToURI(HTTPAuthUtil.java:112)
at ucar.httpservices.HTTPSession.init(HTTPSession.java:811)
at ucar.httpservices.HTTPSession.<init>(HTTPSession.java:797)
at ucar.httpservices.HTTPFactory.newSession(HTTPFactory.java:38)
at ucar.nc2.stream.CdmRemote.<init>(CdmRemote.java:79)
at ucar.nc2.stream.CdmRemoteNetcdfFileProvider.open(CdmRemoteNetcdfFileProvider.java:19)
at ucar.nc2.dataset.NetcdfDataset.openOrAcquireFile(NetcdfDataset.java:712)
at ucar.nc2.dataset.NetcdfDataset.openDataset(NetcdfDataset.java:430)
at ucar.nc2.dataset.NetcdfDataset.acquireDataset(NetcdfDataset.java:576)
at ucar.nc2.dataset.NetcdfDataset.acquireDataset(NetcdfDataset.java:536)
at ucar.nc2.ui.ToolsUI.openFile(ToolsUI.java:1271)
at ucar.nc2.ui.op.DatasetViewerPanel.process(DatasetViewerPanel.java:98)
at ucar.nc2.ui.OpPanel.doit(OpPanel.java:173)
at ucar.nc2.ui.OpPanel.lambda$new$0(OpPanel.java:83)
at javax.swing.JComboBox.fireActionEvent(JComboBox.java:1258)
at ucar.ui.prefs.ComboBox.fireActionEvent(ComboBox.java:160)
at javax.swing.JComboBox.setSelectedItem(JComboBox.java:586)
at javax.swing.plaf.basic.BasicComboBoxUI$Handler.actionPerformed(BasicComboBoxUI.java:1943)
at javax.swing.JTextField.fireActionPerformed(JTextField.java:508)
at javax.swing.JTextField.postActionEvent(JTextField.java:721)
at javax.swing.JTextField$NotifyAction.actionPerformed(JTextField.java:836)
at javax.swing.SwingUtilities.notifyAction(SwingUtilities.java:1668)
at javax.swing.JComponent.processKeyBinding(JComponent.java:2882)
at javax.swing.JComponent.processKeyBindings(JComponent.java:2929)
at javax.swing.JComponent.processKeyEvent(JComponent.java:2845)
at java.awt.Component.processEvent(Component.java:6316)
at java.awt.Container.processEvent(Container.java:2239)
at java.awt.Component.dispatchEventImpl(Component.java:4889)
at java.awt.Container.dispatchEventImpl(Container.java:2297)
at java.awt.Component.dispatchEvent(Component.java:4711)
at java.awt.KeyboardFocusManager.redispatchEvent(KeyboardFocusManager.java:1954)
at java.awt.DefaultKeyboardFocusManager.dispatchKeyEvent(DefaultKeyboardFocusManager.java:835)
at java.awt.DefaultKeyboardFocusManager.preDispatchKeyEvent(DefaultKeyboardFocusManager.java:1103)
at java.awt.DefaultKeyboardFocusManager.typeAheadAssertions(DefaultKeyboardFocusManager.java:974)
at java.awt.DefaultKeyboardFocusManager.dispatchEvent(DefaultKeyboardFocusManager.java:800)
at java.awt.Component.dispatchEventImpl(Component.java:4760)
at java.awt.Container.dispatchEventImpl(Container.java:2297)
at java.awt.Window.dispatchEventImpl(Window.java:2746)
at java.awt.Component.dispatchEvent(Component.java:4711)
at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:760)
at java.awt.EventQueue.access$500(EventQueue.java:97)
at java.awt.EventQueue$3.run(EventQueue.java:709)
at java.awt.EventQueue$3.run(EventQueue.java:703)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:74)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:84)
at java.awt.EventQueue$4.run(EventQueue.java:733)
at java.awt.EventQueue$4.run(EventQueue.java:731)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:74)
at java.awt.EventQueue.dispatchEvent(EventQueue.java:730)
at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:205)
at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:116)
at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:105)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:101)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:93)
at java.awt.EventDispatchThread.run(EventDispatchThread.java:82)
NetcdfFile.openInMemory(URI) converts the URI to a URL and opens an input stream from it to copy the contents to a byte array. However, the input stream is never closed, which can leak a connection. In the case that the URI is obtained from a file-based Path, this leaks a handle to the file, preventing the file from being deleted later.
I believe this could be fixed by opening the URL stream in a try-with-resources block in NetcdfFile.java, as in the following diff. This applies to version 5.2.0, but the master branch has the same issue in both NetcdfFile and the utility class NetcdfFiles.
$ git diff
diff --git a/cdm/core/src/main/java/ucar/nc2/NetcdfFile.java b/cdm/core/src/main/java/ucar/nc2/NetcdfFile.java
index b4e14bc9b7..5976d864f1 100644
--- a/cdm/core/src/main/java/ucar/nc2/NetcdfFile.java
+++ b/cdm/core/src/main/java/ucar/nc2/NetcdfFile.java
@@ -725,7 +725,10 @@ public class NetcdfFile implements ucar.nc2.util.cache.FileCacheable, Closeable
@Deprecated
public static NetcdfFile openInMemory(URI uri) throws IOException {
URL url = uri.toURL();
- byte[] contents = IO.readContentsToByteArray(url.openStream());
+ byte[] contents;
+ try (InputStream in = url.openStream()) {
+ contents = IO.readContentsToByteArray(in);
+ }
return openInMemory(uri.toString(), contents);
}
Here's an MWE to reproduce the problem:
import ucar.nc2.NetcdfFile;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
public class OpenInMemoryDoesNotCloseInputStream {
  public static void main(String[] args) throws IOException {
    /*
     * Create a temp file and download a sample netCDF file. This input stream has nothing to do
     * with the problem; this is just so that there's an input file to load and that can safely be
     * deleted afterward.
     */
    Path temp = Files.createTempFile("file", ".nc");
    String spec =
        "https://www.unidata.ucar.edu/software/netcdf/examples/sresa1b_ncar_ccsm3-example.nc";
    URL url = new URL(spec);
    try (InputStream in = url.openStream()) {
      Files.copy(in, temp, StandardCopyOption.REPLACE_EXISTING);
    }

    /*
     * Read the file into a NetcdfFile. try-with-resources ensures that the NetcdfFile's close()
     * method is called, so all resources with it are released.
     */
    try (NetcdfFile file = NetcdfFile.openInMemory(temp.toUri())) {
      // do stuff with file...
    }

    /*
     * Try to delete the temp file with Files.delete(). This fails with an exception:
     *
     * java.nio.file.FileSystemException:
     * C:\Users\username\AppData\Local\Temp\file8726442302596323190.nc: The process cannot access
     * the file because it is being used by another process.
     */
    Files.delete(temp);
  }
}
Related to #105
Split the cdm module into the following new modules:
- cdm-base (netcdf3, netcdf4, hdf5, hdf4)
- cdm-radial
- cdm-image (was clcommon)
- cdm-misc (uibase)
I'm using netcdf-java, but I couldn't find a function like the one in the Python version. For example, in the Python version:
data = NetcdfFile.open(filePath).variables['temp'][rows]
where rows is an array of many (x, y) points, like [[1,2],[4,5]].
This function can look up the values at two points, [1,2] and [4,5], but I find that the Java API only supports reading serial (contiguous) array sections. Is there a function like the Python version's in the Java version?
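There is no direct equivalent of NumPy-style fancy indexing, but one workaround (a sketch that reads each point as a 1x1 section; inefficient for very many points) is:

```java
import java.io.IOException;
import ucar.ma2.Array;
import ucar.ma2.InvalidRangeException;
import ucar.nc2.NetcdfFile;
import ucar.nc2.Variable;

class ReadPoints {
  // Read the values at a list of individual (i, j) index pairs by reading a
  // 1x1 section per point.
  static double[] readPoints(String filePath, int[][] points)
      throws IOException, InvalidRangeException {
    double[] result = new double[points.length];
    try (NetcdfFile nc = NetcdfFile.open(filePath)) {
      Variable temp = nc.findVariable("temp"); // variable name from the question
      for (int k = 0; k < points.length; k++) {
        Array a = temp.read(points[k], new int[] {1, 1}); // origin, shape
        result[k] = a.getDouble(0);
      }
    }
    return result;
  }
}
```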
New failure in ucar.nc2.iosp.nexrad2.TestNexrad2.testRead on Jenkins (possibly related to #37).
java.io.IOException: java.nio.channels.ClosedChannelException
at ucar.nc2.NetcdfFile.open(NetcdfFile.java:500)
at ucar.nc2.dataset.NetcdfDataset.openOrAcquireFile(NetcdfDataset.java:713)
at ucar.nc2.dataset.NetcdfDataset.openFile(NetcdfDataset.java:580)
at ucar.nc2.iosp.nexrad2.TestNexrad2$MyAct.doAct(TestNexrad2.java:56)
at ucar.unidata.util.test.TestDir.actOnAll(TestDir.java:263)
at ucar.unidata.util.test.TestDir.actOnAll(TestDir.java:213)
at ucar.nc2.iosp.nexrad2.TestNexrad2.testRead(TestNexrad2.java:34)
It does not consistently bomb out on a particular file or after processing a particular number of files.
java.io.IOException: java.lang.NumberFormatException: For input string: "IVE2"
at ucar.nc2.NetcdfFile.open(NetcdfFile.java:366)
at ucar.nc2.dataset.NetcdfDataset.openProtocolOrFile(NetcdfDataset.java:814)
at ucar.nc2.dataset.NetcdfDataset.openFile(NetcdfDataset.java:673)
at ucar.nc2.iosp.nexrad2.TestNexrad2HiResolution$MyAct.doAct(TestNexrad2HiResolution.java:54)
at ucar.unidata.util.test.TestDir.actOnAll(TestDir.java:299)
at ucar.unidata.util.test.TestDir.actOnAll(TestDir.java:243)
at ucar.nc2.iosp.nexrad2.TestNexrad2HiResolution.testRead(TestNexrad2HiResolution.java:30)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
appears to be from Ryan's change on 11/20 at revision:
A user sent me a GRIB2 file of ECMWF flood data that netCDF-Java will not open.
The first problem I encountered in trying to figure out why e.g. Panoply and IDV would not open the file is that it specified Grid Definition 140, which is the Lambert Azimuthal Equal Area projection. Some Googling indicates that this projection was first proposed for addition to GRIB in 2012, which I expect is after most/all of the grids that NJ understands were coded.
After an attempt at hacking Grib2Gds to accept template 140, I then ran into the problem that the file uses Product Definition 73, which is missing from Grib2Pds.
Perhaps there are further problems, but that was where I quit.
The 2D GRIB collection datasets currently do something that is not recommended by CF. It's not a requirement, but it is recommended.
As an easy-to-see example (not a TDS issue, as the code lives in netCDF-Java in the grib module, but easier to see through the TDS), if we look at the CDL representation of the GFS 80km dataset on thredds-test (CDL shown here via cdmremote), we see things like the following:
netcdf grib/NCEP/GFS/CONUS_80km/TwoD {
  dimensions:
    reftime = 124;
    time2 = 41;
  variables:
    double reftime(reftime=124);
      :units = "Hour since 2019-10-20T00:00:00Z";
    double time2(reftime=124, time2=41);
      :units = "Hour since 2019-10-20T00:00:00Z";
    float Pressure_surface(reftime=124, time2=41, y=65, x=93);
      :units = "Pa";
      :coordinates = "reftime time2 y x ";
The main issue here, I think, is with the use of time2 (and the other multidimensional coordinate variables): specifically, the variable time2 is two dimensional, and there exists a dimension with the same name. The CF spec says:
We recommend that the name of a multidimensional coordinate variable should not match the name of any of its dimensions because that precludes supplying a coordinate variable for the dimension. This practice also avoids potential bugs in applications that determine coordinate variables by only checking for a name match between a dimension and a variable and not checking that the variable is one dimensional.
Strictly speaking, what we do here is CF compliant, but it can cause confusion for clients that do not check whether the variable of a matching variable/dimension name pair meets the requirement of being a coordinate variable (that is, that the variable is 1D), and so it is not recommended.
One thing we could do to help clarify the situation is to simply rename the time* dimensions to something like time*_dim, or rename the time* variables to something like valid_time*.
As a side note, if we did change the name of either the dimensions or the variables, it might be nice to introduce a new 1D variable with the same name as the dimension, something simple like "record_number". That's ugly, but it would allow OPeNDAP to stop exposing a 2D Map for the variables in our "Full" GRIB collections, which is very much not allowed by the OPeNDAP spec (the Maps for a Grid must be 1D). I'll open an issue over on https://github.com/Unidata/tds to capture that bug, as it's certainly a bug related to the CDM -> DAP2 data model translation.
I didn't intend to change this; find out what happened.
From #126: "This started showing up when the way we load IOSPs changed with PR 101 (see https://github.com/Unidata/netcdf-java/pull/101/files for the changes)."
Currently dods and dap4 are handled by reflection in ucar.nc2.dataset.NetcdfDataset.openOrAcquireFile.
In NetCDF-Java version 5.0.0, in the method makeValuesElement(Variable, boolean) of class NcMLWriter, there obviously is the intention to separate the writing of floating-point values and integer values (via the boolean variable isRealType), but appending to the string builder (buffer) is always done through StringBuilder.append(double), since the ternary operator used will always produce a result of type double. A statement like if (isRealType) buff.append(iter.getDoubleNext()); else buff.append(iter.getIntNext()); should be used instead.
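A standalone demonstration of the pitfall (the names mirror the issue, not the NcMLWriter source):

```java
public class TernaryPromotion {
  public static void main(String[] args) {
    StringBuilder buff = new StringBuilder();
    boolean isRealType = false;
    double d = 1.5;
    int i = 1;

    // The conditional expression mixes double and int operands, so the int
    // branch is promoted to double and append(double) is selected.
    buff.append(isRealType ? d : i);
    System.out.println(buff); // prints "1.0", not "1"

    // Separate statements keep each branch's type, selecting append(int).
    buff.setLength(0);
    if (isRealType) {
      buff.append(d);
    } else {
      buff.append(i);
    }
    System.out.println(buff); // prints "1"
  }
}
```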
cdm/core/src/test/data/dataset/SimpleGeos/hru_soil_moist_vlen_3hru_5timestep.nc (also in outflow_3seg_5timesteps_vlen.nc).
This is a netCDF-4 file with a variable-length dimension, e.g.:
double catchments_x(hruid=3, *);
  :axis = "X";
Open the enhanced dataset so coordinate systems are added, then try to read the "catchments_x" coordinate; you get:
java.lang.ClassCastException: ucar.ma2.ArrayDouble$D1 cannot be cast to java.lang.Number
at ucar.nc2.dataset.EnhanceScaleMissingUnsignedImpl.convert(EnhanceScaleMissingUnsignedImpl.java:600)
at ucar.nc2.dataset.VariableDS.convert(VariableDS.java:246)
at ucar.nc2.dataset.VariableDS.convert(VariableDS.java:237)
at ucar.nc2.dataset.VariableDS._read(VariableDS.java:413)
at ucar.nc2.Variable.read(Variable.java:609)
at ucar.nc2.dataset.VariableDS.reallyRead(VariableDS.java:422)
at ucar.nc2.dataset.VariableDS._read(VariableDS.java:411)
at ucar.nc2.Variable.read(Variable.java:609)
at ucar.nc2.util.CompareNetcdf2.compareVariableData(CompareNetcdf2.java:508)
at ucar.nc2.util.CompareNetcdf2.compareVariables(CompareNetcdf2.java:296)
at ucar.nc2.util.CompareNetcdf2.compareVariable(CompareNetcdf2.java:268)
at ucar.nc2.util.CompareNetcdf2.compareCoordinateAxis(CompareNetcdf2.java:373)
at ucar.nc2.util.CompareNetcdf2.compareCoordinateSystem(CompareNetcdf2.java:354)
at ucar.nc2.util.CompareNetcdf2.compareVariables(CompareNetcdf2.java:336)
at ucar.nc2.util.CompareNetcdf2.compareGroups(CompareNetcdf2.java:241)
at ucar.nc2.util.CompareNetcdf2.compare(CompareNetcdf2.java:145)
at
This happens in 5.0; it would be interesting to know if it happens in 4.x.
I'm guessing the coordsys logic never tried to deal with a variable-length coordinate?
Related to #105
Split the visad module into the following new modules:
The dividing line here is that the mcidas and gempak IOSPs depend very little on visad.jar and can pull in the necessary code to keep them working; the vis5d IOSP depends heavily on visad.jar, so it can be split off into its own module to allow users to pull in that dependency or not.
Open questions to address, or to get into the milestone, related to the udunits module:
Making an HTTP GET request to http://www.ncei.noaa.gov/thredds/dodsC/cdr/gridsat/GridSat-Aggregation.ncml.dods?time works, and https://www.ncei.noaa.gov/thredds/dodsC/cdr/gridsat/GridSat-Aggregation.ncml.dods?time[0:1:1] works, but http://www.ncei.noaa.gov/thredds/dodsC/cdr/gridsat/GridSat-Aggregation.ncml.dods?time[0:1:1] fails with a 403 (request too big).
Perhaps this is a server-side issue, but netCDF-Java could make things work by doing the right thing in terms of using the proper protocol (https in this case).
In ucar.nc2.dods.DODSNetcdfFile, any dataset URL that starts with dods: is changed to use http: (see netcdf-java/opendap/src/main/java/ucar/nc2/dods/DODSNetcdfFile.java, lines 179 to 192 in 8f15ecf).
Of course, that's not always the correct thing to do, but if redirects are handled properly and the server responds properly, it should all just work. For certain code paths, everything does work. For example, given the following dataset URL:
dods://www.ncei.noaa.gov/thredds/dodsC/cdr/gridsat/GridSat-Aggregation.ncml
we can open the file using NetcdfDataset.acquireFile(), and we can successfully read the DDS and DAS because redirects work and the server behaves well. However, if we try to open with NetcdfDataset.openDataset(), we fail because the OPeNDAP server returns a 403 when reading a slice (in this case, trying to get http://www.ncei.noaa.gov/thredds/dodsC/cdr/gridsat/GridSat-Aggregation.ncml.dods?time[0:1:108082]). It's the "reading a slice" part that seems to be the key.
Doing a GET request on http://www.ncei.noaa.gov/thredds/dodsC/cdr/gridsat/GridSat-Aggregation.ncml.dods?time works, but once I introduce the constraint, I run into problems. For example, if I try an HTTP GET of http://www.ncei.noaa.gov/thredds/dodsC/cdr/gridsat/GridSat-Aggregation.ncml.dods?time[0:1:1], I get:
Status = 403 HTTP/1.1 403 Forbidden
Status Line = HTTP/1.1 403 Forbidden
Response Headers =
Date: Thu, 05 Dec 2019 19:45:27 GMT
Server: Apache-Coyote/1.1
Strict-Transport-Security: max-age=31536000
XDODS-Server: opendap/3.7
Content-Description: dods-error
Content-Type: text/plain
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: X-Requested-With, Content-Type
Connection: close
Transfer-Encoding: chunked
ResponseBody---------------
Error {
code = 403;
message = "Request too big=1.1117421067232E7 Mbytes, max=500.0";
};
If I change the same request to use https:, it works. It's almost like the entire query (after the ?) is being dropped after a redirect when requesting a slice of data from a variable.
This behavior is also seen in the latest netCDF-Java 4.6.x code (current master branch over at https://github.com/unidata/thredds). The ability to handle dods: as a dataset URL through NetcdfDataset used to work, at least as recently as 4.6.12-SNAPSHOT (from February of this year), so it's a somewhat recent change affecting both 4.6.x and 5.0.x.
It seems to me that, regardless of whether this is a server-side issue or not (it likely is), netCDF-Java could handle this by making the right choice when mapping dods: in DODSNetcdfFile.
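One conservative option would be to prefer https and fall back to http (a hypothetical sketch, not the current DODSNetcdfFile logic):

```java
import java.net.HttpURLConnection;
import java.net.URL;

class DodsUrlMapper {
  // Rewrite a dods: dataset URL, probing https first and falling back to
  // http only if the https endpoint is unreachable or errors out.
  static String mapDodsUrl(String url) {
    if (!url.startsWith("dods:")) return url;
    String rest = url.substring("dods:".length());
    String https = "https:" + rest;
    try {
      HttpURLConnection conn = (HttpURLConnection) new URL(https).openConnection();
      conn.setRequestMethod("HEAD");
      if (conn.getResponseCode() < 400) return https;
    } catch (Exception e) {
      // https unreachable; fall back to http below
    }
    return "http:" + rest;
  }
}
```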
Note: the release note (here: https://www.unidata.ucar.edu/blogs/news/entry/netcdf-java-library-and-tds9) points to this repo, but there is no tag for 4.6.14.
I am attempting to load a large dataset (4200x4100) with 21 timesteps into WMS using THREDDS. When I do so, it fails and the page returns...
<ServiceExceptionReport xmlns="http://www.opengis.net/ogc" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="1.3.0" xsi:schemaLocation="http://www.opengis.net/ogc http://schemas.opengis.net/wms/1.3.0/exceptions_1_3_0.xsd">
<ServiceException>
Unexpected error of type java.lang.IllegalStateException
</ServiceException>
<StackTrace>
<![CDATA[
uk.ac.rdg.resc.edal.cdm.LookUpTable.<init>(LookUpTable.java:109)uk.ac.rdg.resc.edal.cdm.LookUpTableGrid.generate(LookUpTableGrid.java:93)uk.ac.rdg.resc.edal.cdm.CdmUtils.createHorizontalGrid(CdmUtils.java:279)uk.ac.rdg.resc.edal.cdm.CdmUtils.readCoverageMetadata(CdmUtils.java:174)uk.ac.rdg.resc.edal.cdm.CdmUtils.readCoverageMetadata(CdmUtils.java:127)thredds.server.wms.ThreddsDataset.<init>(ThreddsDataset.java:95)thredds.server.wms.ThreddsDataset.getThreddsDatasetForRequest(ThreddsDataset.java:270)thredds.server.wms.ThreddsWmsController.dispatchWmsRequest(ThreddsWmsController.java:165)uk.ac.rdg.resc.ncwms.controller.AbstractWmsController.handleRequestInternal(AbstractWmsController.java:207)org.springframework.web.servlet.mvc.AbstractController.handleRequest(AbstractController.java:174)org.springframework.web.servlet.mvc.SimpleControllerHandlerAdapter.handle(SimpleControllerHandlerAdapter.java:50)org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:967)org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:901)org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970)org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:861)javax.servlet.http.HttpServlet.service(HttpServlet.java:634)org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846)javax.servlet.http.HttpServlet.service(HttpServlet.java:741)org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)thredds.servlet.filter.RequestQueryFilter.doFilter(RequestQueryFilter.java:118)org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)thredds.servlet.filter.RequestCORSFilter.doFilterInternal(RequestCORSFilter.java:49)org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:347)org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:263)org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)thredds.servlet.filter.RequestPathFilter.doFilter(RequestPathFilter.java:94)org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)thredds.server.RequestBracketingLogMessageFilter.doFilter(RequestBracketingLogMessageFilter.java:81)org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)org.apache.logging.log4j.web.Log4jServletFilter.doFilter(Log4jServletFilter.java:71)org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)org.apache.catalina
.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199)org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:493)org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:137)org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81)org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:660)org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:798)org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:808)org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1498)org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)java.lang.Thread.run(Thread.java:748)
]]>
</StackTrace>
</ServiceExceptionReport>
and threddsServlet.log returns...
2019-07-30T01:56:09.069 +0000 [ 30415][ 5] ERROR - thredds.server.wms.ThreddsWmsController - dispatchWmsRequest(): Exception: java.lang.IllegalStateException: nLon (=0) and nLat (=2147483647) must be positive and > 0
I notice that 2147483647 is 2^31 - 1 (the maximum signed 32-bit integer). Does WMS handle arrays as 32-bit in some locations, causing us to run into this limit? Note that OPeNDAP handles this dataset without issue, and if I subset the dataset with a stride of 4 (1050x1025), WMS is also able to run without issue.
I have put the datasets on AWS for inspection.
OPeNDAP endpoint... http://54.158.195.139:8080/thredds/dodsC/nwa/v1/NWA_v1_best.ncd.html
WMS endpoint...
http://54.158.195.139:8080/thredds/wms/nwa/v1/NWA_v1_best.ncd?service=WMS&version=1.3.0&request=GetCapabilities
Thanks!
-Joe
With an eye towards supporting the Java Platform Module System, as well as clearly identifying a public API, this is a first pass at reorganizing our modules. For v5.2, the reorg will try not to break the API. Uber artifacts, like toolsUI.jar and netcdfAll.jar, should remain the same content-wise.
Items:
Will likely need to make adjustments to handle these proposed changes to the CF grid_mapping attribute: cf-convention/cf-conventions#224
Also a good reason to look at our support(ish) of WKT and give that some thought.
The removal of ucar.util.prefs from cdm impacted the TDS (Unidata/tds#23) (oops). I need to set up a Jenkins run (hopefully tomorrow) to check PRs against netCDF-Java to make sure the TDS still at least compiles (not a show stopper for a PR, but at least we'd have a heads-up), but for now, this issue exists. Maybe the TDS does not need to be storing preferences this way, so one option is to stop the TDS from doing that. If it's needed, then we can do one of a few things:
- Have the TDS depend on uibase (feels dirty)
- Move ucar.util.prefs from uibase to an existing module, clcommon, although that does not feel great because clcommon → "Client-side common library" (but the TDS depends on that already and it's not client-side, so maybe it's not so bad?)

Make HDFS and S3 RandomAccessFile (and company) part of the mainline codebase (currently in a feature branch). These should be in separate modules, and use the Service Provider Interface, especially since S3 support drags in AWS dependencies. See related #109.
I use netcdf-java 5.2.0 on Windows with JDK 1.8.0. The file is in netCDF-4 format (HDF5-based).
It fails when I try to create a netcdf group with:
NetcdfFileWriter n = NetcdfFileWriter.openExisting(filePath);
n.setRedefineMode(true);
Group rootGroup = n.addGroup(null, "");
n.addGroup(rootGroup, "test");
n.setRedefineMode(false);
I get:
java.lang.NullPointerException
at ucar.nc2.jni.netcdf.Nc4Iosp.updateDimensions(Nc4Iosp.java:445)
at ucar.nc2.jni.netcdf.Nc4Iosp.updateDimensions(Nc4Iosp.java:496)
at ucar.nc2.jni.netcdf.Nc4Iosp.flush(Nc4Iosp.java:3502)
at ucar.nc2.NetcdfFileWriter.rewrite(NetcdfFileWriter.java:938)
at ucar.nc2.NetcdfFileWriter.setRedefineMode(NetcdfFileWriter.java:928)
....
TestCoordinatesMatchGbx.readGrib1Files() fails on:
/usr/local/google/home/jlcaron/thredds/cdmUnitTest/formats/grib1/QPE.20101005.009.157 Total_precipitation_surface_Accumulation
expected: 2010-10-05T18:00:00Z
but was : 2010-10-05T12:00:00Z
at ucar.nc2.grib.GribCoordsMatchGbx.readAndTestGrib1(GribCoordsMatchGbx.java:389)
at ucar.nc2.grib.GribCoordsMatchGbx.readCoverageData(GribCoordsMatchGbx.java:179)
at ucar.nc2.grib.GribCoordsMatchGbx.readCoverage(GribCoordsMatchGbx.java:147)
at ucar.nc2.grib.GribCoordsMatchGbx.readCoverageDataset(GribCoordsMatchGbx.java:108)
at ucar.nc2.grib.TestCoordinatesMatchGbx$GribAct.doAct(TestCoordinatesMatchGbx.java:194)
at ucar.unidata.util.test.TestDir.actOnAll(TestDir.java:263)
at ucar.nc2.grib.TestCoordinatesMatchGbx.readAllDir(TestCoordinatesMatchGbx.java:169)
at ucar.nc2.grib.TestCoordinatesMatchGbx.readGrib1Files(TestCoordinatesMatchGbx.java:53)
API docs need to be added to several of the classes down in thredds.catalog.client. The docs should at least point out that you need to use the builders (specifically CatalogBuilder, for those wanting to read a THREDDS Client Catalog).
Also need to move https://github.com/Unidata/netcdf-java/blob/master/docs/src/private/website/netcdf-java/reference/ThreddsCatalogs.adoc into the main documentation set, as well as beef it up with some tangible examples (e.g. working with THREDDS Metadata, following catalogRefs).
Need to add thredds.catalog.client to the public API generation so that it is accessible via https://docs.unidata.ucar.edu/netcdf-java/<version>/javadoc/
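A hedged usage sketch that could go in those docs (the builder package and method names follow the client-catalog code as I understand it; the exact API may differ by version):

```java
import java.net.URI;
import thredds.client.catalog.Catalog;
import thredds.client.catalog.Dataset;
import thredds.client.catalog.builder.CatalogBuilder;

class ReadCatalog {
  // Read a THREDDS client catalog and list its top-level datasets.
  static void show(String catalogUrl) throws Exception {
    CatalogBuilder builder = new CatalogBuilder();
    Catalog cat = builder.buildFromURI(new URI(catalogUrl));
    if (builder.hasFatalError()) {
      System.out.println("Error: " + builder.getErrorMessage());
      return;
    }
    for (Dataset ds : cat.getDatasets()) {
      System.out.println(ds.getName());
    }
  }
}
```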
NcML files (and probably more types) are ignored as netCDF files. The dataset should be opened using the NetcdfDataset.open methods.
netcdf-java/dap4/d4cdm/src/main/java/dap4/cdm/dsp/CDMDSP.java (lines 117 to 118 in c74b0bb)
This relates to issue Unidata/tds#30.
N.B.: @zequihg50, this is FYI.
There is a problem after changes in the method DataBTree.Node::first. The method ucar.nc2.Variable.read() returns a result array that contains invalid values in the first elements of the first dimension of, for example, the 5AMP-History__0/Bx variable in the specified file.
HistDumpTest9.zip
Deprecate classes in ucar.nc2.dt:
- ucar.nc2.dt.grid is deprecated in favor of ucar.nc2.ft2.coverage
- ucar.nc2.dt.radial is deprecated in favor of ucar.nc2.ft.radial

Actual removal of ucar.nc2.dt will not occur until netcdf-java v7.
Client code stays here, server code goes to https://github.com/unidata/tds
This issue has been there forever, but the recent change of the default "ignore zero intervals" (was true, now false) has exposed the problem in one or more of our test datasets.
Reproduce by creating an ncx4 from cdmUnitTest/datasets/NDFD-CONUS-5km/.*grib2:
e.g., put the above expression in ToolsUI (IOSP/Grib2/Grib2Collection), then choose (rightmost icon) "Write Index"; you will get:
GribCoverageDataset.open failed
java.lang.IllegalStateException: Time2D with type= MRC
at ucar.nc2.grib.coverage.GribCoverageDataset.makeTime2DCoordinates(GribCoverageDataset.java:512)
at ucar.nc2.grib.coverage.GribCoverageDataset.createCoverageCollection(GribCoverageDataset.java:180)
at ucar.nc2.grib.coverage.GribCoverageDataset.open(GribCoverageDataset.java:79)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at ucar.nc2.ft2.coverage.CoverageDatasetFactory.openGrib(CoverageDatasetFactory.java:113)
at ucar.nc2.ft2.coverage.CoverageDatasetFactory.openCoverageDataset(CoverageDatasetFactory.java:61)
For NDFD thredds serving, we can put "ignore zero intervals" back to true. But the general case should be fixed somehow.
We need to enhance StructureData, but extending StructureDS is probably wrong, because Sequences don't contain ArrayStructures.
The enhance functionality should be a mixin, I suspect.
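Illustrative only (not the actual netCDF-Java design): the mixin idea expressed as a Java 8 interface with default methods, so both StructureDS-style and Sequence-based implementations could share the enhance logic without a common ArrayStructure-bearing superclass.

```java
interface Enhanceable {
  double rawValue(int index);

  // Standard packing enhancement: unpacked = packed * scale_factor + add_offset
  default double enhancedValue(int index, double scaleFactor, double addOffset) {
    return rawValue(index) * scaleFactor + addOffset;
  }
}
```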
See the discussion and bug report from [email protected] on Sep 2, 2019, on the netcdf-java mailing list.
HDF5 has a "version 3" spec. Our code should be reviewed for conformance.
We especially need to gather example files that use version 3 features for testing.
Add a Convention parser that handles the CF-Radial convention.
Currently there is none, and those files are not recognized as radial data types.
The test files are in ~cdmUnitTest/conventions/cfradial. Are these current, or are there more recent examples? Has the spec evolved? Is it being used?
To do this we need a medium/long-range strategy for radial feature types. What should the API look like? Is there feedback from MetPy/Siphon and the radar experts?