
hdf5's Introduction

HDF5 version 1.15.0 currently under development



HPC configure/build/test results

Please refer to the release_docs/INSTALL file for installation instructions.

This repository contains the source code of a high-performance library and the file format specification that together implement the HDF5® data model. The model has been adopted across many industries, and this implementation has become a de facto standard for data management in science, engineering, and research communities worldwide.

The HDF Group is the developer, maintainer, and steward of HDF5 software. Find more information about The HDF Group, the HDF5 Community, and other HDF5 software projects, tools, and services at The HDF Group's website.

DOCUMENTATION

This release is fully functional for the API described in the documentation.

https://hdfgroup.github.io/hdf5/develop/_l_b_a_p_i.html

Full Documentation and Programming Resources for this release can be found at

https://hdfgroup.github.io/hdf5/develop/index.html

The latest doxygen documentation generated on changes to develop is available at:

https://hdfgroup.github.io/hdf5/develop

See the RELEASE.txt file in the release_docs/ directory for information specific to the features and updates included in this release of the library.

Several more files in the release_docs/ directory provide specific details for common platforms and configurations.

INSTALL - Start here: general instructions for compiling and installing the library
INSTALL_CMAKE - Instructions for building with CMake (Kitware.com)
INSTALL_parallel - Instructions for building and configuring Parallel HDF5
INSTALL_Windows and INSTALL_Cygwin - Instructions for building on Windows and Cygwin

HELP AND SUPPORT

Information regarding Help Desk and Support services is available at

https://help.hdfgroup.org

FORUM and NEWS

The HDF Forum is provided for public announcements and discussions of interest to the general HDF5 Community.

These forums are provided as an open and public service for searching and reading. Posting requires completing a simple registration and allows one to join in the conversation. Please read the instructions pertaining to the Forum's use and configuration.

RELEASE SCHEDULE

HDF5 release schedule

HDF5 does not release on a regular schedule. Instead, releases are driven by new features and bug fixes, though we try to have at least one release of each maintenance branch per year. Future HDF5 releases indicated on this schedule are tentative.

Release   New Features
1.14.5    oss-fuzz fixes, ros3 VFD improvements
1.14.6    Last maintenance release of 1.14
1.16.0    Complex number support, updated library defaults (cache sizes, etc.)
2.0.0     Multi-threaded HDF5, crashproofing / metadata journaling, full (VFD) SWMR, encryption, digital signatures, sparse datasets, improved storage for variable-length datatypes, better Unicode support (especially on Windows), semantic versioning

Some HDF5 2.0.0 features listed here may be released in a 1.16.x release.

This list of feature release versions is also tentative, and the specific release in which a feature is introduced may change.

SNAPSHOTS, PREVIOUS RELEASES AND SOURCE CODE

Periodically development code snapshots are provided at the following URL:

https://github.com/HDFGroup/hdf5/releases/tag/snapshot

Source packages for current and previous releases are located at:

https://portal.hdfgroup.org/Downloads

Development code is available at our Github location:

https://github.com/HDFGroup/hdf5.git


hdf5's Issues

In C, double underscores should not be used at the start of an identifier

I notice there were some changes recently like:

cba993c

- static void H5_debug_mask(const char*);
+ static void H5__debug_mask(const char*);

Where some things got renamed to have double underscores.

However, in C, all names containing a double underscore are reserved:

https://wiki.sei.cmu.edu/confluence/display/cplusplus/DCL51-CPP.+Do+not+declare+or+define+a+reserved+identifier

These days clang can warn about that.

So really that change above is kinda backwards. :)
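For reference, a sketch of the reservation rules as I understand them (hypothetical declarations; C11 reserves identifiers that begin with a double underscore or an underscore plus an uppercase letter, while C++ additionally reserves any identifier containing a double underscore anywhere):

static void H5_debug_mask(const char *s);   /* fine in C and C++ */
static void H5__debug_mask(const char *s);  /* fine in C, reserved in C++
                                             * because it contains "__";
                                             * clang's -Wreserved-identifier
                                             * can flag it */
static void __H5_debug_mask(const char *s); /* reserved in both languages */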

dsets test reading past end of buffer (from PR #3)

So I can confirm that two are fixed, and H5TEST-dsets is not. I debugged it a bit:

H5MM_memcpy is reading past the end of a 32 byte buffer because it's being told to read 256 bytes. The buffer is:

hdf5/test/dsets.c

Line 12185 in 52ac746

const int buffer[8] = {0, 1, 2, 3, 4, 5, 6, 7};

The backtrace is:

frame #5: 0x0000000101490c5c libhdf5_debug.1000.dylib`H5MM_memcpy(dest=0x00006120004816c8, src=0x00000001000f7120, n=256) at H5MM.c:606:11
frame #6: 0x00000001026ddbd1 libhdf5_debug.1000.dylib`H5VM_memcpyvv(_dst=0x00006120004816c8, dst_max_nseq=1, dst_curr_seq=0x00007ffeefbfbae0, dst_len_arr=0x0000625000005108, dst_off_arr=0x0000625000007908, _src=0x00000001000f7120, src_max_nseq=1, src_curr_seq=0x00007ffeefbfbac0, src_len_arr=0x0000625000000108, src_off_arr=0x0000625000002908) at H5VM.c:1625:13
frame #7: 0x00000001009c0b83 libhdf5_debug.1000.dylib`H5D__compact_writevv(io_info=0x00007ffeefbfc570, dset_max_nseq=1, dset_curr_seq=0x00007ffeefbfbae0, dset_size_arr=0x0000625000005108, dset_offset_arr=0x0000625000007908, mem_max_nseq=1, mem_curr_seq=0x00007ffeefbfbac0, mem_size_arr=0x0000625000000108, mem_offset_arr=0x0000625000002908) at H5Dcompact.c:314:22
frame #8: 0x0000000100b1634a libhdf5_debug.1000.dylib`H5D__select_io(io_info=0x00007ffeefbfc570, elmt_size=4, nelmts=64, file_space=0x0000613000b5fe80, mem_space=0x000061300053afc0) at H5Dselect.c:221:37
frame #9: 0x0000000100b172b3 libhdf5_debug.1000.dylib`H5D__select_write(io_info=0x00007ffeefbfc570, type_info=0x00007ffeefbfd110, nelmts=64, file_space=0x0000613000b5fe80, mem_space=0x000061300053afc0) at H5Dselect.c:310:9
frame #10: 0x00000001008fdb49 libhdf5_debug.1000.dylib`H5D__chunk_write(io_info=0x00007ffeefbfd020, type_info=0x00007ffeefbfd110, nelmts=64, file_space=0x000061300053afc0, mem_space=0x000061300053afc0, fm=0x0000620000000080) at H5Dchunk.c:2730:13
frame #11: 0x0000000100ae05be libhdf5_debug.1000.dylib`H5D__write(dataset=0x0000606000013580, mem_type_id=216172782113784762, mem_space=0x000061300053afc0, file_space=0x000061300053afc0, buf=0x00000001000f7120) at H5Dio.c:531:9
frame #12: 0x000000010265d946 libhdf5_debug.1000.dylib`H5VL__native_dataset_write(obj=0x0000606000013580, mem_type_id=216172782113784762, mem_space_id=0, file_space_id=0, dxpl_id=792633534417207304, buf=0x00000001000f7120, req=0x0000000000000000) at H5VLnative_dataset.c:206:9
frame #13: 0x00000001025aebea libhdf5_debug.1000.dylib`H5VL__dataset_write(obj=0x0000606000013580, cls=0x0000616000000980, mem_type_id=216172782113784762, mem_space_id=0, file_space_id=0, dxpl_id=792633534417207304, buf=0x00000001000f7120, req=0x0000000000000000) at H5VLcallback.c:2082:9
frame #14: 0x00000001025ae050 libhdf5_debug.1000.dylib`H5VL_dataset_write(vol_obj=0x0000603000006640, mem_type_id=216172782113784762, mem_space_id=0, file_space_id=0, dxpl_id=792633534417207304, buf=0x00000001000f7120, req=0x0000000000000000) at H5VLcallback.c:2114:9
frame #15: 0x000000010088d551 libhdf5_debug.1000.dylib`H5D__write_api_common(dset_id=360287970189640115, mem_type_id=216172782113784762, mem_space_id=0, file_space_id=0, dxpl_id=792633534417207304, buf=0x00000001000f7120, token_ptr=0x0000000000000000, _vol_obj_ptr=0x0000000000000000) at H5D.c:1105:9
frame #16: 0x000000010088c561 libhdf5_debug.1000.dylib`H5Dwrite(dset_id=360287970189640115, mem_type_id=216172782113784762, mem_space_id=0, file_space_id=0, dxpl_id=0, buf=0x00000001000f7120) at H5D.c:1154:9
frame #17: 0x00000001000b856b dsets`test_bt2_hdr_fd(env_h5_driver="nomatch", fapl=792633534417208475) at dsets.c:12488:9
frame #18: 0x0000000100007390 dsets`main at dsets.c:15270:29
(lldb) fr sel 6
frame #6: 0x00000001026ddbd1 libhdf5_debug.1000.dylib`H5VM_memcpyvv(_dst=0x00006120004816c8, dst_max_nseq=1, dst_curr_seq=0x00007ffeefbfbae0, dst_len_arr=0x0000625000005108, dst_off_arr=0x0000625000007908, _src=0x00000001000f7120, src_max_nseq=1, src_curr_seq=0x00007ffeefbfbac0, src_len_arr=0x0000625000000108, src_off_arr=0x0000625000002908) at H5VM.c:1625:13
   1622	        acc_len = 0;
   1623	        do {
   1624	            /* Copy data */
-> 1625	            H5MM_memcpy(dst, src, tmp_dst_len);
   1626	
   1627	            /* Accumulate number of bytes copied */
   1628	            acc_len += tmp_dst_len;

(lldb) p tmp_dst_len
(size_t) $11 = 256

(lldb) p tmp_src_len
(size_t) $12 = 256

I gave up at that point, but hopefully it gives you a head start.
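For context, a runnable sketch of the overread pattern being described (hypothetical, not the actual test code): an 8-int source buffer paired with a 64-element dataspace makes H5Dwrite transfer 256 bytes from a 32-byte buffer.

#include "hdf5.h"

int main(void)
{
    const int buffer[8] = {0, 1, 2, 3, 4, 5, 6, 7};   /* 8 * 4 = 32 bytes */
    hsize_t   dims[1]   = {64};                       /* 64 * 4 = 256 bytes */

    hid_t file  = H5Fcreate("overread.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    hid_t space = H5Screate_simple(1, dims, NULL);
    hid_t dset  = H5Dcreate2(file, "d", H5T_NATIVE_INT, space,
                             H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    /* Reads 224 bytes past the end of 'buffer'. */
    H5Dwrite(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT, buffer);

    H5Dclose(dset);
    H5Sclose(space);
    H5Fclose(file);
    return 0;
}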

Support 'Universal Binaries' on macOS

As when Macs previously changed from PowerPC to Intel, Mac developers these days again need to support 'Universal Binaries', which are basically libraries/executables that contain both Intel and ARM code.

Back in the PowerPC+Intel days, building HDF5 as Universal Binaries worked fine, for me at least.

But now the current code in the develop branch, and presumably other branches, contains in config/cmake_ext_mod/ConfigureChecks.cmake this:

if (APPLE)
  list (LENGTH CMAKE_OSX_ARCHITECTURES ARCH_LENGTH)
  if (ARCH_LENGTH GREATER 1)
    set (CMAKE_OSX_ARCHITECTURES "" CACHE STRING "" FORCE)
    message (FATAL_ERROR "Building Universal Binaries on OS X is NOT supported by the HDF5 project. This is"
    "due to technical reasons. The best approach would be build each architecture in separate directories"
    "and use the 'lipo' tool to combine them into a single executable or library. The 'CMAKE_OSX_ARCHITECTURES'"
    "variable has been set to a blank value which will build the default architecture for this system.")
  endif ()
  set (${HDF_PREFIX}_AC_APPLE_UNIVERSAL_BUILD 0)
endif ()

It's here: https://github.com/HDFGroup/hdf5/blob/develop/config/cmake_ext_mod/ConfigureChecks.cmake#L28

This is deliberately thwarting Universal builds, for some 'technical reasons' according to the text... but what are the reasons? How can we overcome them to get universal builds working?

See also:
https://forum.hdfgroup.org/t/apple-silicon-binaries-universal-binaries/7848

Thanks.

How are links saved with order tracking enabled?

When creating an HDF5 file with order tracking enabled on the root group and more than 8 top-level groups, I do not see any top-level links when iterating over the data objects starting from the superblock. How are links expected to be saved? When I go from 8 to 9 top-level HDF5 groups, the file structure seems to change drastically.

FYI, I'm doing this in an attempt to reconstruct a corrupt HDF5 file (the disk filled up during writing, which apparently corrupts the file). The hdf5-devel utilities do not seem to contain a utility that does this.

See jjhelmus/pyfive#46 for versions and example.
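This sounds like the compact-to-dense link storage transition: with the default link phase-change settings (max_compact = 8), a group's links live in its object header until the ninth link, at which point they migrate to dense storage (a fractal heap indexed by a v2 B-tree), which would explain the drastic change in file structure. A minimal sketch of the setup being described (assumed, not the reporter's exact code):

#include <stdio.h>
#include "hdf5.h"

int main(void)
{
    /* The FCPL doubles as the root group's creation property list. */
    hid_t fcpl = H5Pcreate(H5P_FILE_CREATE);
    H5Pset_link_creation_order(fcpl, H5P_CRT_ORDER_TRACKED | H5P_CRT_ORDER_INDEXED);

    hid_t file = H5Fcreate("ordered.h5", H5F_ACC_TRUNC, fcpl, H5P_DEFAULT);

    char name[16];
    for (int i = 0; i < 9; i++) {  /* the 9th group triggers dense storage */
        snprintf(name, sizeof(name), "g%d", i);
        H5Gclose(H5Gcreate2(file, name, H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT));
    }

    H5Pclose(fcpl);
    H5Fclose(file);
    return 0;
}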

Can't compile using AOCC Compilers

I'm trying to compile the latest stable release (1.12.0) on a Linux machine, using AMD Optimizing C/C++ Compilers, but I get errors when running make.

The configure step completes successfully (I have zlib compiled with AOCC and installed to the directory pointed to by the PREFIX variable):

./configure --prefix=$PREFIX --with-zlib=$PREFIX --enable-fortran

But when running make, I get the following error:

  FCLD     libhdf5_fortran.la
clang-11: error: unknown argument: '-soname'
clang-11: error: no such file or directory: 'libhdf5_fortran.so.200'
make[3]: *** [Makefile:963: libhdf5_fortran.la] Error 1
make[3]: Leaving directory '/shared/home/enardi/src/hdf5-1.12.0/fortran/src'
make[2]: *** [Makefile:877: all] Error 2
make[2]: Leaving directory '/shared/home/enardi/src/hdf5-1.12.0/fortran/src'
make[1]: *** [Makefile:828: all-recursive] Error 1
make[1]: Leaving directory '/shared/home/enardi/src/hdf5-1.12.0/fortran'
make: *** [Makefile:662: all-recursive] Error 1

too much API breakage - mandatory use of semantic versioning

HDF5 v1.12 has changed so much since the initial release of HDF5 that it's more appropriate to call it HDF6 at this stage.

The breaking API changes between minor HDF5 releases (e.g. 1.8, 1.10, 1.12) are rather disconcerting. While there are various pre-processor tricks used as "workarounds", they are just brittle hacks in the end. Stuff like H5_USE_110_API is just papering over the unwarranted breakage.
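For readers who haven't used the macro, a minimal sketch of the compatibility mapping (assuming a 1.12 install; H5_USE_110_API is normally passed as -DH5_USE_110_API on the compile line, and defining it before the include has the same effect):

/* Request the 1.10 API surface from newer headers; must come before
 * the include. Versioned calls such as H5Oget_info then resolve to
 * their 1.10 forms rather than the 1.12 ones. */
#define H5_USE_110_API
#include "hdf5.h"

int main(void)
{
    hid_t file = H5Fcreate("compat.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    H5Fclose(file);
    return 0;
}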

The point of HDF is to be a boring abstraction for storage functionality, where by "boring" I mean stable. HDF5 is definitely not boring, which decreases its value and utility.

The use of semantic versioning to label HDF releases would help in this regard immensely. Minor releases such as v1.12 should not be introducing any kind of API breaks. Such breaks are much better communicated by bumping the major version number, for example HDF6, HDF7, etc.

The current approach of API breakage between minor releases is causing all sorts of issues. Examples:

TLDR: I would suggest that the HDF5 developers be made explicitly aware that public API breaks in HDF5 releases are not exactly conducive to productivity. It would be useful to properly communicate such changes via an increase in the major version number, for example HDF6. The current practice of updating "HDF5" with incompatible changes is overall a negative net contribution.

Do not use /bin/mv in configure.ac

Believe it or not, there are systems where mv is not located at /bin/mv. For example:

gerd@guix ~/scratch/hdf5 [env]$ which mv
/run/current-system/profile/bin/mv

In that case, the references to /bin/mv on lines 98 and 103 in configure.ac are showstoppers.

hdf5-config.cmake is located in wrong directory (at least on Windows 10)

Hi,

In my project, which externally builds HDF5, when I try to run find_package(hdf5) I get the error:
CMake Error at C:/S/extensions/Proba_fixed/Proba-build/hdf5-build/hdf5-config.cmake:25 (message):
File or directory C:/S/extensions/Proba_fixed/bin referenced by variable HDF5_TOOLS_DIR does not exist !

If I move hdf5-config.cmake to C:/.../hdf5-build/src/CMakeFiles the commands:

find_package(hdf5 REQUIRED CONFIG PATHS "C:/S/extensions/Proba_fixed/Proba-build/hdf5-build/src/CMakeFiles")
message(${HDF5_TOOLS_DIR})

give me C:/S/extensions/Proba_fixed/Proba-build/bin as it should be.

But with this I get another error:
CMake Error at C:/S/extensions/Proba_fixed/Proba-build/hdf5-build/src/CMakeFiles/hdf5-config.cmake:144 (include):
include could not find load file:
C:/S/extensions/Proba_fixed/Proba-build/share/cmake/hdf5/hdf5-targets.cmake

So I think hdf5-config.cmake should be two folders deeper than it is now.

Windows 10 x64, HDF5 develop branch and hdf5-1_12_0 git tag, CMake hdf5-config.cmake

Deadlock in H5FD__mpio_open due to bad MPI_File_get_size

While debugging another issue unrelated to HDF5, I think I tripped up a deadlock condition. In my test, it seems the MPI_File_get_size call returns an error here:

hdf5/src/H5FDmpio.c

Lines 828 to 832 in c56464f

/* Only processor p0 will get the filesize and broadcast it. */
if (mpi_rank == 0) {
if (MPI_SUCCESS != (mpi_code = MPI_File_get_size(fh, &size)))
HMPI_GOTO_ERROR(NULL, "MPI_File_get_size failed", mpi_code)
} /* end if */

That causes rank 0 to jump to the bottom of the function to close the file, which is a collective:

hdf5/src/H5FDmpio.c

Lines 858 to 861 in c56464f

done:
if (ret_value == NULL) {
if (file_opened)
MPI_File_close(&fh);

While other ranks are stuck in a different collective here:

if (MPI_SUCCESS != (mpi_code = MPI_Bcast(&size, (int)sizeof(MPI_Offset), MPI_BYTE, 0, comm)))

Also, I noticed that there may be other patterns like this while grepping the file to look for this location, like maybe:

hdf5/src/H5FDmpio.c

Lines 1223 to 1229 in c56464f

/* Read on rank 0 Bcast to other ranks */
if (file->mpi_rank == 0)
if (MPI_SUCCESS !=
(mpi_code = MPI_File_read_at(file->f, mpi_off, buf, size_i, buf_type, &mpi_stat)))
HMPI_GOTO_ERROR(FAIL, "MPI_File_read_at failed", mpi_code)
if (MPI_SUCCESS != (mpi_code = MPI_Bcast(buf, size_i, buf_type, 0, file->comm)))
HMPI_GOTO_ERROR(FAIL, "MPI_Bcast failed", mpi_code)
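A sketch of one shape the fix could take (not HDF5's actual patch): rank 0 broadcasts its error status before anyone enters the next collective, so every rank takes the same branch.

/* Hypothetical rework of the open path quoted above. */
int        get_size_failed = 0;
MPI_Offset size            = 0;

if (mpi_rank == 0)
    get_size_failed = (MPI_SUCCESS != MPI_File_get_size(fh, &size));

MPI_Bcast(&get_size_failed, 1, MPI_INT, 0, comm);

if (get_size_failed) {
    MPI_File_close(&fh);  /* now a collective close on every rank */
    /* ... take the error path on all ranks ... */
}
else
    MPI_Bcast(&size, (int)sizeof(MPI_Offset), MPI_BYTE, 0, comm);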

Build problems when using add_subdirectory

It is not possible to build against hdf5 when the lib is added with add_subdirectory. The header file H5pubconf.h cannot be found in the include dirs.
All generated header files with public visibility should be configured into an extra directory like ${CMAKE_CURRENT_BINARY_DIR}/generated/h5, and this directory should be included with

target_include_directories(mylib PUBLIC
  $<BUILD_INTERFACE:${CMAKE_CURRENT_BINARY_DIR}/generated>
  $<INSTALL_INTERFACE:include/mylib>  # <prefix>/include/mylib
)

Merge changes from PRs #425, 427, 428 to support branches.

Merge needed to hdf5_1_12, hdf5_1_10, and hdf5_1_8 as appropriate for PRs #425, 427, 428 after passing all test configurations in daily test cycle

#425 Fixes various warnings noticed on Windows
#427 Fixed clang-tidy readability-misleading-indentation warnings
#428 Fixed clang-tidy readability-redundant-control-flow warnings

H5LT lexer and parser share state with global variables when there is no need.

The lexer uses an unconventional strategy for parsing lexical categories NUMBER (decimal numbers) and STRING (double-quoted strings) that involves sharing the parse context with the lexer using global variables. There are a couple of problems with that. First, the lexer is too complicated for the simple tokenization it performs—it's hard to tell if it is correct. Second, as @seanm points out, the shared global variables spill into the namespace shared by other libraries and application programs—e.g., VTK.
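A sketch of the usual remedy (hypothetical names, not H5LT's actual code): thread the shared state through a context struct that the parser hands to the lexer, so nothing spills into the global namespace.

typedef struct H5LT_parse_ctx_t {
    const char *input;   /* text being tokenized */
    size_t      pos;     /* current offset into the input */
    double      num_val; /* value of the most recent NUMBER token */
    char       *str_val; /* value of the most recent STRING token */
} H5LT_parse_ctx_t;

/* The lexer reads and writes only the context it is given. */
static int H5LT_next_token(H5LT_parse_ctx_t *ctx);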

concatenate

Is there any chance of adding a concatenate function that would allow one to append new HDF5 files to an existing HDF5 file? So, to take all the HDF5 files in a folder and merge them?
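There is no single concatenate call in the library as far as I know, but object copying gets close; a sketch using H5Ocopy (hypothetical file and object names):

#include "hdf5.h"

int main(void)
{
    hid_t src = H5Fopen("src.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
    hid_t dst = H5Fopen("dst.h5", H5F_ACC_RDWR, H5P_DEFAULT);

    /* Copy one root-level object; real code would iterate the source
     * root with H5Literate and handle name collisions. */
    H5Ocopy(src, "dataset1", dst, "dataset1", H5P_DEFAULT, H5P_DEFAULT);

    H5Fclose(src);
    H5Fclose(dst);
    return 0;
}

The h5copy command-line tool wraps the same operation for use from a shell.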

Add pkg-config Support to Autotools Build

As per Homebrew/homebrew-core#61490, I am trying to build a Linux and macOS cross-platform library that uses the HDF5 library. The compiler I'm using attempts to find the headers and libraries on both platforms via a pkg-config file. However, currently I can only find the hdf5.pc file on the Ubuntu apt installation of hdf5 and not in the macOS build of hdf5.

sudo apt install libhdf5-dev on Ubuntu 18.04 installs HDF5 1.10.0, while brew install installs 1.12.0. It appears the difference is that the macOS Homebrew build is using autotools rather than cmake for the build and currently HDF5 only produces the pkg-config for the cmake build. Could the HDF5 project potentially include a pkg-config output in its autotools build as well?

1.10.7 - Pkgconfig on cmake build incomplete (RHEL 7.8 (Toss3))

Here are the files in $prefix/lib/pkgconfig when building with cmake:

~ ls lib/pkgconfig
hdf5-1.10.7.pc  hdf5_fortran-1.10.7.pc  hdf5_hl-1.10.7.pc

It's missing the fortran_hl config file. Also the pc files themselves point at a pc file that isn't there:

~ cat lib/pkgconfig/hdf5_hl-1.10.7.pc
prefix=/usr/projects/hpcsoft/toss3/kodiak/hdf5/1.10.7_gcc-9.3.0_openmpi-3.1.6
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include

Name: hdf5_hl
Description: HDF5 (Hierarchical Data Format 5) Software Library
Version: 1.10.7

Cflags: -I${includedir}
Libs: -L${libdir}  -lhdf5_hl
Requires: hdf5
Libs.private:   -lhdf5_hl
Requires.private: hdf5
~
~ cat lib/pkgconfig/hdf5_fortran-1.10.7.pc
prefix=/usr/projects/hpcsoft/toss3/kodiak/hdf5/1.10.7_gcc-9.3.0_openmpi-3.1.6
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include

Name: hdf5_fortran
Description: HDF5 (Hierarchical Data Format 5) Software Library
Version: 1.10.7

Cflags: -I${includedir}
Libs: -L${libdir}  -lhdf5_fortran
Requires: hdf5
Libs.private:   -lhdf5_fortran
Requires.private: hdf5

There's no hdf5.pc file so the compiler wrappers h5fc and h5hlcc are broken.

Also, there is no h5hlfc wrapper in bin, which is confusing considering there are Fortran HL libraries and h5hlcc exists in bin.

Cannot open include file: 'H5version.h': No such file or directory (when building dev branches)

Hi,

When I try to build dev branch I get the following error:
C:\S\extensions\Proba_fixed\Proba-build\hdf5\src\H5public.h(32,10): fatal error C1083: Cannot open include file: 'H5version.h': No such file or directory [C:\S\extensions\Proba_fixed\Proba-build\hdf5-build\src\H5detect.vcxproj] [C:\S\extensions\Proba_fixed\Proba-build\hdf5.vcxproj]
Building Custom Rule C:/S/extensions/Proba_fixed/Proba-build/hdf5/src/CMakeLists.txt
H5make_libsettings.c

I can't understand why I get this error, but I get it only when I build HDF5 dev branches.

My configuration is Windows 10 x64, CMake 3.19.0, MSVC 2019 v142 x64.

Here is the output from CMake:
The C compiler identification is MSVC 19.28.29334.0
Detecting C compiler ABI info
Detecting C compiler ABI info - done
Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.28.29333/bin/Hostx64/x64/cl.exe - skipped
Detecting C compile features
Detecting C compile features - done
SOVERSION: 1000.0.0
SOVERSION_TOOLS: 1000.0.0
SOVERSION_CXX: 1000.0.0
SOVERSION_F: 1000.0.0
SOVERSION_HL: 1000.0.0
SOVERSION_HL_CXX: 1000.0.0
SOVERSION_HL_F: 1000.0.0
SOVERSION_JAVA: 1000.0.0
Looking for include file sys/file.h
Looking for include file sys/file.h - not found
Looking for include file sys/ioctl.h
Looking for include file sys/ioctl.h - not found
Looking for include file sys/resource.h
Looking for include file sys/resource.h - not found
Looking for include file sys/socket.h
Looking for include file sys/socket.h - not found
Looking for include file sys/stat.h
Looking for include file sys/stat.h - found
Looking for include files sys/stat.h, sys/time.h
Looking for include files sys/stat.h, sys/time.h - not found
Looking for include files sys/stat.h, sys/types.h
Looking for include files sys/stat.h, sys/types.h - found
Looking for 3 include files sys/stat.h, ..., features.h
Looking for 3 include files sys/stat.h, ..., features.h - not found
Looking for 3 include files sys/stat.h, ..., dirent.h
Looking for 3 include files sys/stat.h, ..., dirent.h - not found
Looking for 3 include files sys/stat.h, ..., setjmp.h
Looking for 3 include files sys/stat.h, ..., setjmp.h - found
Looking for 4 include files sys/stat.h, ..., stddef.h
Looking for 4 include files sys/stat.h, ..., stddef.h - found
Looking for 5 include files sys/stat.h, ..., stdint.h
Looking for 5 include files sys/stat.h, ..., stdint.h - found
Looking for 6 include files sys/stat.h, ..., unistd.h
Looking for 6 include files sys/stat.h, ..., unistd.h - not found
Looking for 6 include files sys/stat.h, ..., io.h
Looking for 6 include files sys/stat.h, ..., io.h - found
Looking for 7 include files sys/stat.h, ..., winsock2.h
Looking for 7 include files sys/stat.h, ..., winsock2.h - found
Looking for 8 include files sys/stat.h, ..., globus/common.h
Looking for 8 include files sys/stat.h, ..., globus/common.h - not found
Looking for 8 include files sys/stat.h, ..., pdb.h
Looking for 8 include files sys/stat.h, ..., pdb.h - not found
Looking for 8 include files sys/stat.h, ..., pthread.h
Looking for 8 include files sys/stat.h, ..., pthread.h - not found
Looking for 8 include files sys/stat.h, ..., srbclient.h
Looking for 8 include files sys/stat.h, ..., srbclient.h - not found
Looking for 8 include files sys/stat.h, ..., string.h
Looking for 8 include files sys/stat.h, ..., string.h - found
Looking for 9 include files sys/stat.h, ..., strings.h
Looking for 9 include files sys/stat.h, ..., strings.h - not found
Looking for 9 include files sys/stat.h, ..., stdlib.h
Looking for 9 include files sys/stat.h, ..., stdlib.h - found
Looking for 10 include files sys/stat.h, ..., memory.h
Looking for 10 include files sys/stat.h, ..., memory.h - found
Looking for 11 include files sys/stat.h, ..., dlfcn.h
Looking for 11 include files sys/stat.h, ..., dlfcn.h - not found
Looking for 11 include files sys/stat.h, ..., inttypes.h
Looking for 11 include files sys/stat.h, ..., inttypes.h - found
Looking for 12 include files sys/stat.h, ..., netinet/in.h
Looking for 12 include files sys/stat.h, ..., netinet/in.h - not found
Looking for 12 include files sys/stat.h, ..., netdb.h
Looking for 12 include files sys/stat.h, ..., netdb.h - not found
Looking for 12 include files sys/stat.h, ..., arpa/inet.h
Looking for 12 include files sys/stat.h, ..., arpa/inet.h - not found
Looking for 12 include files sys/stat.h, ..., stdbool.h
Looking for 12 include files sys/stat.h, ..., stdbool.h - found
Looking for include file quadmath.h
Looking for include file quadmath.h - not found
Looking for gethostname in ucb;
Looking for gethostname in ucb; - not found
Performing Other Test STDC_HEADERS - Success
Looking for sys/types.h
Looking for sys/types.h - found
Looking for stdint.h
Looking for stdint.h - found
Looking for stddef.h
Looking for stddef.h - found
Check size of char
Check size of char - done
Check size of short
Check size of short - done
Check size of int
Check size of int - done
Check size of unsigned
Check size of unsigned - done
Check size of long
Check size of long - done
Check size of long long
Check size of long long - done
Check size of __int64
Check size of __int64 - done
Check size of float
Check size of float - done
Check size of double
Check size of double - done
Check size of long double
Check size of long double - done
Check size of int8_t
Check size of int8_t - done
Check size of uint8_t
Check size of uint8_t - done
Check size of int_least8_t
Check size of int_least8_t - done
Check size of uint_least8_t
Check size of uint_least8_t - done
Check size of int_fast8_t
Check size of int_fast8_t - done
Check size of uint_fast8_t
Check size of uint_fast8_t - done
Check size of int16_t
Check size of int16_t - done
Check size of uint16_t
Check size of uint16_t - done
Check size of int_least16_t
Check size of int_least16_t - done
Check size of uint_least16_t
Check size of uint_least16_t - done
Check size of int_fast16_t
Check size of int_fast16_t - done
Check size of uint_fast16_t
Check size of uint_fast16_t - done
Check size of int32_t
Check size of int32_t - done
Check size of uint32_t
Check size of uint32_t - done
Check size of int_least32_t
Check size of int_least32_t - done
Check size of uint_least32_t
Check size of uint_least32_t - done
Check size of int_fast32_t
Check size of int_fast32_t - done
Check size of uint_fast32_t
Check size of uint_fast32_t - done
Check size of int64_t
Check size of int64_t - done
Check size of uint64_t
Check size of uint64_t - done
Check size of int_least64_t
Check size of int_least64_t - done
Check size of uint_least64_t
Check size of uint_least64_t - done
Check size of int_fast64_t
Check size of int_fast64_t - done
Check size of uint_fast64_t
Check size of uint_fast64_t - done
Check size of size_t
Check size of size_t - done
Check size of ssize_t
Check size of ssize_t - failed
Check size of off_t
Check size of off_t - done
Check size of off64_t
Check size of off64_t - failed
Check size of time_t
Check size of time_t - done
Check size of Bool
Check size of Bool - done
Looking for alarm
Looking for alarm - not found
Looking for fcntl
Looking for fcntl - not found
Looking for flock
Looking for flock - not found
Looking for fork
Looking for fork - not found
Looking for frexpf
Looking for frexpf - not found
Looking for frexpl
Looking for frexpl - not found
Looking for getrusage
Looking for getrusage - not found
Looking for llround
Looking for llround - found
Looking for llroundf
Looking for llroundf - found
Looking for lround
Looking for lround - found
Looking for lroundf
Looking for lroundf - found
Looking for lstat
Looking for lstat - not found
Looking for pread
Looking for pread - not found
Looking for pwrite
Looking for pwrite - not found
Looking for rand_r
Looking for rand_r - not found
Looking for random
Looking for random - not found
Looking for round
Looking for round - found
Looking for roundf
Looking for roundf - found
Looking for setsysinfo
Looking for setsysinfo - not found
Looking for signal
Looking for signal - found
Looking for setjmp
Looking for setjmp - found
Looking for siglongjmp
Looking for siglongjmp - not found
Looking for sigsetjmp
Looking for sigsetjmp - not found
Looking for sigprocmask
Looking for sigprocmask - not found
Looking for snprintf
Looking for snprintf - not found
Looking for srandom
Looking for srandom - not found
Looking for strtoll
Looking for strtoll - found
Looking for strtoull
Looking for strtoull - found
Looking for symlink
Looking for symlink - not found
Looking for tmpfile
Looking for tmpfile - found
Looking for asprintf
Looking for asprintf - not found
Looking for vasprintf
Looking for vasprintf - not found
Looking for waitpid
Looking for waitpid - not found
Looking for vsnprintf
Looking for vsnprintf - not found
Looking for sigsetjmp
Looking for sigsetjmp - not found
Checking for InitOnceExecuteOnce:
Performing Test InitOnceExecuteOnce - Success
Performing Other Test HAVE_INLINE - Success
Performing Other Test HAVE___INLINE__ - Failed
Performing Other Test HAVE___INLINE - Success
Checking for appropriate format for 64 bit long:
Checking for appropriate format for 64 bit long: found "I64"
Looking for difftime
Looking for difftime - found
Check size of __float128
Check size of __float128 - failed
Check size of _Quad
Check size of _Quad - failed
Checking IF your system converts long double to (unsigned) long values with special algorithm... no
Checking IF your system can convert (unsigned) long to long double values with special algorithm... no
Checking IF correctly converting long double to (unsigned) long long values... yes
Checking IF correctly converting (unsigned) long long to long double values... yes
Checking IF the cpu is power9 and cannot correctly converting long double values... no
Checking IF alignment restrictions are strictly enforced... yes
Warnings Configuration: default: /DWIN32 /D_WINDOWS /W3 :
Could NOT find Perl (missing: PERL_EXECUTABLE)
Cannot generate headers - perl not found
Cannot execute TEST flushrefresh - perl not found
The CXX compiler identification is MSVC 19.28.29334.0
Detecting CXX compiler ABI info
Detecting CXX compiler ABI info - done
Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.28.29333/bin/Hostx64/x64/cl.exe - skipped
Detecting CXX compile features
Detecting CXX compile features - done
Check for STD namespace
Check for STD namespace - found
Looking for C++ include stdint.h
Looking for C++ include stdint.h - found
Performing CXX Test OLD_HEADER_FILENAME - Failed
Performing CXX Test HDF_NO_NAMESPACE - Failed
Performing CXX Test HDF_NO_STD - Failed
Performing CXX Test BOOL_NOTDEFINED - Failed
Performing CXX Test NO_STATIC_CAST - Failed
Performing CXX Test CXX_HAVE_OFFSETOF - Failed
Warnings Configuration: CXX default:
Configuring done
Generating done

Continuous fuzzing through OSS-fuzz

I have been working on setting up continuous fuzzing of hdf5 by way of Libfuzzer and OSS-fuzz.

Fuzzing of hdf5 has been discussed here: #263 (comment) and here: #272.

Tagging @schwehr since you suggested integrating with OSS-fuzz.

I have set up a draft integration over at OSS-fuzz here: google/oss-fuzz#5103.
Feel free to leave any comments over there.

It is free to integrate with OSS-fuzz, and it is expected that bugs are fixed, so that the resources spent on fuzzing hdf5 go towards resolving bugs in the code base.

To finish the integration, at least one maintainer's email is needed for bug reports.

I recommend moving the fuzzer and the build script upstream to make them easier to maintain.
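For illustration, a minimal LibFuzzer harness sketch (an assumption, not the actual OSS-fuzz harness; it uses the high-level library's H5LTopen_file_image to treat the fuzz input as an HDF5 file image):

#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include "hdf5.h"
#include "hdf5_hl.h"

int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    if (size == 0)
        return 0;

    H5Eset_auto2(H5E_DEFAULT, NULL, NULL);  /* silence expected failures */

    /* H5LTopen_file_image wants a non-const buffer; with flags = 0 the
     * library works on its own copy and we keep ownership of this one. */
    void *image = malloc(size);
    if (image == NULL)
        return 0;
    memcpy(image, data, size);

    hid_t fid = H5LTopen_file_image(image, size, 0);
    if (fid >= 0)
        H5Fclose(fid);

    free(image);
    return 0;
}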

Crash when writing parallel compressed chunks

Follow up of https://forum.hdfgroup.org/t/crash-when-writing-parallel-compressed-chunks/6186

I found that the following test still fails with the current develop branch:

#include <stdlib.h>
#include <stdio.h>

#include "hdf5.h"


#define _MPI
#define _DSET2
//#define _COMPRESS

#define NPROC 4
#define CHUNK0 4

// Equivalent to original gist
// Works on 1.10.5 with patch, crashes on 1.10.5 vanilla and hangs on 1.10.6
//#define CHUNK1 32768
//#define NCHUNK1 32

// Works on 1.10.5 with and without patch and 1.10.6
//#define CHUNK1 256
//#define NCHUNK1 8192

// Works on 1.10.5 with and without patch and 1.10.6
//#define CHUNK1 512
//#define NCHUNK1 8192

// Crashes on 1.10.5 with and without patch, 1.10.6 and 1.12.0
#define CHUNK1 256
#define NCHUNK1 16384

int main(int argc, char **argv) {

    int mpi_size = 1, mpi_rank = 0;
    hid_t fapl_id, file_id, dset_space, dcpl_id, ds, ds2, propid, mem_dspace, sel_dspace;

    fapl_id = H5Pcreate(H5P_FILE_ACCESS);
#ifdef _MPI
    MPI_Comm comm  = MPI_COMM_WORLD;
    MPI_Info info  = MPI_INFO_NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(comm, &mpi_size);
    MPI_Comm_rank(comm, &mpi_rank);

    H5Pset_fapl_mpio(fapl_id, comm, info);
#endif

    printf("MPI rank [%i/%i]\n", mpi_rank, mpi_size);

    printf("rank=%i creating file\n", mpi_rank);
    file_id = H5Fcreate("test1.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl_id);
    H5Pclose(fapl_id);

    // Define total dataset size
    hsize_t dset_dims[2] = {NPROC * CHUNK0, CHUNK1 * NCHUNK1};
    dset_space = H5Screate_simple (2, dset_dims, NULL);
    dcpl_id = H5Pcreate(H5P_DATASET_CREATE);

    // Set chunking and compression params
    hsize_t chunk_dims[2] = {CHUNK0, CHUNK1};
    H5Pset_chunk(dcpl_id, 2, chunk_dims);

#ifdef _COMPRESS
    H5Pset_deflate(dcpl_id, 9);
#endif


    // Define selection
    hsize_t sel_dims[2] = {CHUNK0, CHUNK1 * NCHUNK1};
    sel_dspace = H5Screate_simple(2, dset_dims, NULL);
    hsize_t offset[2] = {mpi_rank * sel_dims[0], 0};
    printf("rank=%i creating selection [%llu:%llu, %llu:%llu]\n",
           mpi_rank, offset[0], offset[0] + sel_dims[0], offset[1], offset[1] + sel_dims[1]);
    H5Sselect_hyperslab(sel_dspace, H5S_SELECT_SET,
                        offset, NULL, sel_dims, NULL);

    // Set the dspace for the input data
    mem_dspace = H5Screate_simple (2, sel_dims, NULL);

    propid = H5Pcreate(H5P_DATASET_XFER);
#ifdef _MPI
    H5Pset_dxpl_mpio(propid, H5FD_MPIO_COLLECTIVE);
#endif

    // Create the dataset
    printf("rank=%i creating dataset1\n", mpi_rank);
    ds = H5Dcreate (file_id, "dset1", H5T_NATIVE_FLOAT, dset_space, H5P_DEFAULT, dcpl_id, H5P_DEFAULT);

    // Create array of data to write
    // Initialise with random data.
    int totalsize = sel_dims[0] * sel_dims[1];
    float *data = (float *)malloc(sizeof(float) * totalsize);
    for(int i = 0; i < totalsize; i++) {
        data[i] = (float)drand48();
    }

    printf("rank=%i writing dataset1\n", mpi_rank);
    H5Dwrite(ds, H5T_NATIVE_FLOAT, mem_dspace, sel_dspace, propid, data);
    printf("rank=%i finished writing dataset1\n", mpi_rank);

#ifdef _DSET2
    // Create the dataset
    printf("rank=%i creating dataset2\n", mpi_rank);
    ds2 = H5Dcreate (file_id, "dset2", H5T_NATIVE_FLOAT, dset_space, H5P_DEFAULT, dcpl_id, H5P_DEFAULT);
    
    // Generate new data and write
    printf("rank=%i writing dataset2\n", mpi_rank);
    for(int i = 0; i < totalsize; i++) {
        data[i] = drand48();
    }
    H5Dwrite(ds2, H5T_NATIVE_FLOAT, mem_dspace, sel_dspace, propid, data);
    
    H5Dclose(ds2);
#endif

    // Close down everything
    printf("rank=%i closing everything\n", mpi_rank);
    H5Dclose(ds);
    H5Sclose(dset_space);
    H5Sclose(sel_dspace);
    H5Sclose(mem_dspace);
    H5Pclose(dcpl_id);
    H5Pclose(propid);
    H5Fclose(file_id);

    free(data);

    MPI_Finalize();
    return 0;
 }
$ /usr/bin/mpiexec -n 4 --mca io romio321 build/bin/chunk_compress
MPI rank [0/4]
rank=0 creating file
MPI rank [1/4]
rank=1 creating file
MPI rank [2/4]
rank=2 creating file
MPI rank [3/4]
rank=3 creating file
rank=2 creating selection [8:12, 0:4194304]
rank=3 creating selection [12:16, 0:4194304]
rank=3 creating dataset1
rank=0 creating selection [0:4, 0:4194304]
rank=0 creating dataset1
rank=1 creating selection [4:8, 0:4194304]
rank=1 creating dataset1
rank=2 creating dataset1
rank=1 writing dataset1
rank=3 writing dataset1
rank=0 writing dataset1
rank=2 writing dataset1
rank=2 finished writing dataset1
rank=2 creating dataset2
rank=1 finished writing dataset1
rank=1 creating dataset2
rank=0 finished writing dataset1
rank=0 creating dataset2
rank=3 finished writing dataset1
rank=3 creating dataset2
HDF5-DIAG: Error detected in HDF5 (1.13.0) MPI-process 2:
  #000: ../src/H5D.c line 189 in H5Dcreate2(): unable to synchronously create dataset
    major: Dataset
    minor: Unable to create file
  #001: ../src/H5D.c line 137 in H5D__create_api_common(): unable to create dataset
    major: Dataset
    minor: Unable to create file
  #002: ../src/H5VLcallback.c line 1809 in H5VL_dataset_create(): dataset create failed
    major: Virtual Object Layer
    minor: Unable to create file
  #003: ../src/H5VLcallback.c line 1774 in H5VL__dataset_create(): dataset create failed
    major: Virtual Object Layer
    minor: Unable to create file
  #004: ../src/H5VLnative_dataset.c line 73 in H5VL__native_dataset_create(): unable to create dataset
    major: Dataset
    minor: Unable to initialize object
  #005: ../src/H5Dint.c line 396 in H5D__create_named(): unable to create and link to dataset
    major: Dataset
    minor: Unable to initialize object
  #006: ../src/H5L.c line 2359 in H5L_link_object(): unable to create new link to object
    major: Links
    minor: Unable to initialize object
  #007: ../src/H5L.c line 2601 in H5L__create_real(): can't insert link
    major: Links
    minor: Unable to insert object
  #008: ../src/H5Gtraverse.c line 838 in H5G_traverse(): internal path traversal failed
    major: Symbol table
    minor: Object not found
  #009: ../src/H5Gtraverse.c line 569 in H5G__traverse_real(): can't look up component
    major: Symbol table
    minor: Object not found
  #010: ../src/H5Gobj.c line 1097 in H5G__obj_lookup(): can't check for link info message
    major: Symbol table
    minor: Can't get value
  #011: ../src/H5Gobj.c line 306 in H5G__obj_get_linfo(): unable to read object header
    major: Symbol table
    minor: Can't get value
  #012: ../src/H5Omessage.c line 845 in H5O_msg_exists(): unable to protect object header
    major: Object header
    minor: Unable to protect metadata
  #013: ../src/H5Oint.c line 1048 in H5O_protect(): unable to load object header
    major: Object header
    minor: Unable to protect metadata
  #014: ../src/H5AC.c line 1431 in H5AC_protect(): H5C_protect() failed
    major: Object cache
    minor: Unable to protect metadata
  #015: ../src/H5C.c line 2341 in H5C_protect(): MPI_Bcast failed
    major: Internal error (too specific to document in detail)
    minor: Some MPI function failed
  #016: ../src/H5C.c line 2341 in H5C_protect(): MPI_ERR_TRUNCATE: message truncated
    major: Internal error (too specific to document in detail)
    minor: MPI Error String
rank=2 writing dataset2
HDF5-DIAG: Error detected in HDF5 (1.13.0) MPI-process 3:
  #000: ../src/H5D.c line 189 in H5Dcreate2(): unable to synchronously create dataset
    major: Dataset
    minor: Unable to create file
  #001: ../src/H5D.c line 137 in H5D__create_api_common(): unable to create dataset
    major: Dataset
    minor: Unable to create file
  #002: ../src/H5VLcallback.c line 1809 in H5VL_dataset_create(): dataset create failed
    major: Virtual Object Layer
    minor: Unable to create file
  #003: ../src/H5VLcallback.c line 1774 in H5VL__dataset_create(): dataset create failed
    major: Virtual Object Layer
    minor: Unable to create file
  #004: ../src/H5VLnative_dataset.c line 73 in H5VL__native_dataset_create(): unable to create dataset
    major: Dataset
    minor: Unable to initialize object
  #005: ../src/H5Dint.c line 396 in H5D__create_named(): unable to create and link to dataset
    major: Dataset
    minor: Unable to initialize object
  #006: ../src/H5L.c line 2359 in H5L_link_object(): unable to create new link to object
    major: Links
    minor: Unable to initialize object
  #007: ../src/H5L.c line 2601 in H5L__create_real(): can't insert link
    major: Links
    minor: Unable to insert object
  #008: ../src/H5Gtraverse.c line 838 in H5G_traverse(): internal path traversal failed
    major: Symbol table
    minor: Object not found
  #009: ../src/H5Gtraverse.c line 569 in H5G__traverse_real(): can't look up component
    major: Symbol table
    minor: Object not found
  #010: ../src/H5Gobj.c line 1097 in H5G__obj_lookup(): can't check for link info message
    major: Symbol table
    minor: Can't get value
  #011: ../src/H5Gobj.c line 306 in H5G__obj_get_linfo(): unable to read object header
    major: Symbol table
    minor: Can't get value
  #012: ../src/H5Omessage.c line 845 in H5O_msg_exists(): unable to protect object header
    major: Object header
    minor: Unable to protect metadata
  #013: ../src/H5Oint.c line 1048 in H5O_protect(): unable to load object header
    major: Object header
    minor: Unable to protect metadata
  #014: ../src/H5AC.c line 1431 in H5AC_protect(): H5C_protect() failed
    major: Object cache
    minor: Unable to protect metadata
  #015: ../src/H5C.c line 2341 in H5C_protect(): MPI_Bcast failed
    major: Internal error (too specific to document in detail)
    minor: Some MPI function failed
  #016: ../src/H5C.c line 2341 in H5C_protect(): MPI_ERR_TRUNCATE: message truncated
    major: Internal error (too specific to document in detail)
    minor: MPI Error String
rank=3 writing dataset2
HDF5-DIAG: Error detected in HDF5 (1.13.0) MPI-process 3:
  #000: ../src/H5D.c line 1156 in H5Dwrite(): can't synchronously write data
    major: Dataset
    minor: Write failed
  #001: ../src/H5D.c line 1096 in H5D__write_api_common(): dset_id is not a dataset ID
    major: Invalid arguments to routine
    minor: Inappropriate type
HDF5-DIAG: Error detected in HDF5 (1.13.0) MPI-process 3:
  #000: ../src/H5D.c line 472 in H5Dclose(): not a dataset ID
    major: Invalid arguments to routine
    minor: Inappropriate type
rank=3 closing everything
HDF5-DIAG: Error detected in HDF5 (1.13.0) MPI-process 2:
  #000: ../src/H5D.c line 1156 in H5Dwrite(): can't synchronously write data
    major: Dataset
    minor: Write failed
  #001: ../src/H5D.c line 1096 in H5D__write_api_common(): dset_id is not a dataset ID
    major: Invalid arguments to routine
    minor: Inappropriate type
HDF5-DIAG: Error detected in HDF5 (1.13.0) MPI-process 2rank=2 closing everything
:
  #000: ../src/H5D.c line 472 in H5Dclose(): not a dataset ID
    major: Invalid arguments to routine
    minor: Inappropriate type

Note that --mca io ompio gives identical results.

Ubuntu 20.04.1
openmpi 4.0.3

Filter plugins crashing with 1.12.0

According to the examples found in the documentation concerning the use of filter plugins, a compression filter has to replace the input buffer with the output buffer. That is shown in the BZIP2 plugin example:

/* Always replace the input buffer with the output buffer. */
free(*buf);
*buf = outbuf;
*buf_size = outbuflen;
return outdatalen;

This information comes from:

https://confluence.hdfgroup.org/display/HDF5/HDF5+Dynamically+Loaded+Filters

We find that several filter plugins lead to crashes that can be avoided by not calling free(*buf), but that should introduce a memory leak.

What has changed in HDF5 1.12.0? Is the library also freeing the input buffer, leading to a double free and a crash?
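For comparison, a sketch of the contract as documented (a hypothetical pass-through filter; if the library in 1.12.0 were also freeing the input buffer, this pattern would indeed produce a double free):

#include <stdlib.h>
#include <string.h>
#include <hdf5.h>

static size_t example_filter(unsigned int flags, size_t cd_nelmts,
                             const unsigned int *cd_values, size_t nbytes,
                             size_t *buf_size, void **buf)
{
    (void)cd_nelmts;
    (void)cd_values;

    if (flags & H5Z_FLAG_REVERSE)
        return nbytes;              /* decompression path not shown */

    /* Pretend "compression": allocate the output buffer, ... */
    void *outbuf = malloc(nbytes);
    if (outbuf == NULL)
        return 0;                   /* returning 0 signals failure */
    memcpy(outbuf, *buf, nbytes);

    /* ... free the input exactly once, and hand ownership back. */
    free(*buf);
    *buf      = outbuf;
    *buf_size = nbytes;
    return nbytes;                  /* number of valid output bytes */
}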

CMake variable HDF5_INSTALL_CMAKE_DIR should be lib/cmake on Windows

On Windows, it looks like HDF5_INSTALL_CMAKE_DIR is set to cmake in

set (${package_prefix}_INSTALL_CMAKE_DIR cmake)

Shouldn't this have the default value ${CMAKE_INSTALL_LIBDIR}/cmake (given that the GNUInstallDirs module is imported) or just lib/cmake?

The issue is that the CMake package files (e.g. hdf5-config.cmake) end up being installed in %PREFIX%\cmake\hdf5, which is NOT a path considered by the search procedure. This is not convenient, as downstream projects have to define HDF5_DIR to work around the issue.

Related to conda-forge/hdf5-feedstock#109


FTP server potentially compromised

Hi. Homebrew maintainer here.

When 1.8.22 was initially released, the https://support.hdfgroup.org/ftp/HDF5/releases/hdf5-1.8/hdf5-1.8.22/src/hdf5-1.8.22.tar.bz2 file had a sha256 of: 0ac77e1c22bce5bbbdb337bd7f97aeb5ef43c727a84ccb6d683d092eb57ebd8e

The file's sha256 is now 689b88c6a5577b05d603541ce900545779c96d62b6f83d3f23f46559b48893a4.

This means that the initial file on your FTP server was replaced. I doubt this is expected?

Originally posted by @iMichka in #279 (comment)

Code crashes in HDF5-1.12.0 when some processors have no data to write

Hi All,
After updating to HDF5-1.12.0, I ran into a problem when some processors have no data to write. Since parallel writing is collective, I cannot exclude those processors from the write. With the old version there seems to be no such problem. So far, the problem only occurs on Linux using the GNU compiler; the same code has no problem using the Intel compiler, or the latest GNU compiler on macOS. An example to reproduce the problem is attached.

I have already included h5sselect_none in the code for those processors without data. But it does not take effect.
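For reference, the zero-participation pattern in the C API (your code is Fortran; dset, dxpl, count, offset, has_data, and data are assumed to exist): every rank calls H5Dwrite collectively, and ranks with nothing to write select none in both the file and the memory dataspace.

hid_t mem_space  = H5Screate_simple(1, count, NULL);
hid_t file_space = H5Dget_space(dset);

if (has_data) {
    H5Sselect_hyperslab(file_space, H5S_SELECT_SET, offset, NULL, count, NULL);
} else {
    H5Sselect_none(file_space);
    H5Sselect_none(mem_space);   /* empty selection on both sides */
}

/* Collective write: ranks with empty selections still participate. */
H5Dwrite(dset, H5T_NATIVE_DOUBLE, mem_space, file_space, dxpl, data);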

!c The following platforms have been tested:
!c Macos-Mojave + GNU-8.2 + HDF5-1.12.0 -> Works fine
!c Ubuntu-16.04 + GNU-5.4 + HDF5-1.12.0 -> Crashes
!c Ubuntu-16.04 + GNU-7.5 + HDF5-1.12.0 -> Crashes
!c Ubuntu-16.04 + GNU-5.4 + HDF5-1.10.x -> Works fine
!c Centos-7 + Intel2018 + HDF5-1.12.0 -> Works fine

hdf5_zero_data.F90.zip

Thanks,

Daniel

HDF5 crashes with inefficient compressors

I noticed that HDF5 crashes when I/O filters produce more data than the original dataset size.

When a dataset is created, its declared dimensions + data type are naturally honored when it comes time to write the data with H5Dwrite. The I/O filter interface, however, allows a compressor to either return a number that’s smaller than that (in which case it successfully compressed the data) or slightly larger (in which case the compressor didn’t do a good job).

Now, let’s say we have a really bad compressor which requires 100x more room than necessary. What I observe is that HDF5 seems to truncate the data, so it’s not possible to retrieve it afterwards. In some cases, HDF5 even crashes when the dataset handle is closed.

Here’s an example I/O filter that reproduces the problem.

// build with 'g++ liberror.cpp -C -o libtestcrash.so -shared -fPIC -Wall -g -ggdb'
#include <hdf5.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>
#include <string.h>

extern "C" {

size_t callback(unsigned int flags, size_t cd_nelmts, const unsigned int *cd_values, size_t nbytes, size_t *buf_size, void **buf)
{
    if (flags & H5Z_FLAG_REVERSE) {
        return *buf_size;
    } else {
        char *newbuf = (char *) calloc(1000*1000, sizeof(char));
        free(*buf);
        *buf = newbuf;
        *buf_size = 1000*1000;
        return *buf_size;
    }
}

const H5Z_class2_t H5Z_UDF_FILTER[1] = {{
    H5Z_CLASS_T_VERS, 0x2112, 1, 1, "crash_filter", NULL, NULL, callback,
}};

H5PL_type_t H5PLget_plugin_type(void) { return H5PL_TYPE_FILTER; }
const void *H5PLget_plugin_info(void) { return H5Z_UDF_FILTER; }
}

The corresponding application code is here:

// build with 'g++ mainerror.cpp -o mainerror -g -ggdb -Wall -lhdf5'
// run with 'HDF5_PLUGIN_PATH=$PWD ./mainerror file.h5'
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <hdf5.h>

#define CHECK(hid) if ((hid) < 0) { fprintf(stderr, "failed @line %d\n", __LINE__); exit(1); }

int main(int argc, char **argv)
{
    if (argc != 2) {
        printf("Syntax: %s <file.h5>\n", argv[0]);
        exit(1);
    }
    hsize_t dims[2] = {10, 10};
    hid_t file_id = H5Fopen(argv[1], H5F_ACC_RDWR, H5P_DEFAULT);
    CHECK(file_id);
    hid_t space_id = H5Screate_simple(2, dims, NULL);
    CHECK(space_id);
    hid_t dcpl_id = H5Pcreate(H5P_DATASET_CREATE);
    CHECK(dcpl_id);
    CHECK(H5Pset_filter(dcpl_id, 0x2112, H5Z_FLAG_MANDATORY, 0, NULL));
    CHECK(H5Pset_chunk(dcpl_id, 2, dims));
    hid_t dset_id = H5Dcreate(file_id, "crash_dataset", H5T_STD_I8LE, space_id, H5P_DEFAULT, dcpl_id, H5P_DEFAULT);
    CHECK(dset_id);
    char *data = (char *) calloc(dims[0] * dims[1], sizeof(char));
    CHECK(H5Dwrite(dset_id, H5T_STD_I8LE, H5S_ALL, H5S_ALL, H5P_DEFAULT, data));
    CHECK(H5Dclose(dset_id));
    CHECK(H5Pclose(dcpl_id));
    CHECK(H5Sclose(space_id));
    CHECK(H5Fclose(file_id));
    free(data);
    return 0;
}

If you change the I/O filter code so that it allocates 10x10, or even 100x100, the problem won’t kick in.

t_cache_image hanging on some machines

Date: Fri, 23 Oct 2020 07:28:52 -0600
From: Orion Poplawski [email protected]
To: HDF Helpdesk [email protected]
Subject: t_cache_image hanging on some machines

When building hdf5 1.10.6 or 1.10.7 for Fedora Rawhide using the Fedora builders, t_cache_image is hanging when run with openmpi
on some architectures (including x86_64). Unfortunately we cannot reproduce it locally and so are reduced in our ability to debug
the issue. Here is the output of the test:

============================
Testing: t_cache_image

        Error ignored
        ============================
        Test log for t_cache_image
        ============================
        ===================================
        Parallel metadata cache image tests
        mpi_size = 6
        ===================================
        Constructing test files:
        writing t_cache_image_00 ... done.
        writing t_cache_image_01 ... done.
        Test file construction complete.
        testfile construction complete – proceeding with tests.
        Testing parallel CI load test – proc0 md write – R/O HDF5-DIAG: Error detected in HDF5 (1.10.7) MPI-process 2:

#000: ../../src/H5D.c line 298 in H5Dopen2(): unable to open dataset
major: Dataset
minor: Can't open object
#001: ../../src/H5Dint.c line 1429 in H5D__open_name(): not found
major: Dataset
minor: Object not found
#002: ../../src/H5Gloc.c line 420 in H5G_loc_find(): can't find object
major: Symbol table
minor: Object not found
#003: ../../src/H5Gtraverse.c line 848 in H5G_traverse(): internal path traversal failed
major: Symbol table
minor: Object not found
#004: ../../src/H5Gtraverse.c line 579 in H5G__traverse_real(): can't look up component
major: Symbol table
minor: Object not found
#005: ../../src/H5Gobj.c line 1118 in H5G__obj_lookup(): can't check for link info message
major: Symbol table
minor: Can't get value
#006: ../../src/H5Gobj.c line 324 in H5G__obj_get_linfo(): unable to read object header
major: Symbol table
minor: Can't get value
#007: ../../src/H5Omessage.c line 873 in H5O_msg_exists(): unable to protect object header
major: Object header
minor: Unable to protect metadata
#008: ../../src/H5Oint.c line 1056 in H5O_protect(): unable to load object header
major: Object header
minor: Unable to protect metadata
#009: ../../src/H5AC.c line 1517 in H5AC_protect(): H5C_protect() failed
major: Object cache
minor: Unable to protect metadata
#010: ../../src/H5C.c line 2378 in H5C_protect(): Can't load cache image
major: Object cache
minor: Unable to load metadata into cache
#011: ../../src/H5Cimage.c line 1164 in H5C__load_cache_image(): Can't reconstruct cache contents from image block
major: Object cache
minor: Unable to decode value
#012: ../../src/H5Cimage.c line 3137 in H5C__reconstruct_cache_contents(): reconstruction of cache entry failed
major: Object cache
minor: Internal error detected
#013: ../../src/H5Cimage.c line 3408 in H5C__reconstruct_cache_entry(): invalid entry size
major: Object cache
minor: Bad value
HDF5-DIAG: Error detected in HDF5 (1.10.7) MPI-process 1:
#000: ../../src/H5D.c line 298 in H5Dopen2(): unable to open dataset
major: Dataset
minor: Can't open object
#001: ../../src/H5Dint.c line 1429 in H5D__open_name(): not found
major: Dataset
minor: Object not found
#002: ../../src/H5Gloc.c line 420 in H5G_loc_find(): can't find object
major: Symbol table
minor: Object not found
#003: ../../src/H5Gtraverse.c line 848 in H5G_traverse(): internal path traversal failed
major: Symbol table
minor: Object not found
#004: ../../src/H5Gtraverse.c line 579 in H5G__traverse_real(): can't look up component
major: Symbol table
minor: Object not found
#005: ../../src/H5Gobj.c line 1118 in H5G__obj_lookup(): can't check for link info message
major: Symbol table
minor: Can't get value
#006: ../../src/H5Gobj.c line 324 in H5G__obj_get_linfo(): unable to read object header
major: Symbol table
minor: Can't get value
#007: ../../src/H5Omessage.c line 873 in H5O_msg_exists(): unable to protect object header
major: Object header
minor: Unable to protect metadata
#008: ../../src/H5Oint.c line 1056 in H5O_protect(): unable to load object header
major: Object header
minor: Unable to protect metadata
#009: ../../src/H5AC.c line 1517 in H5AC_protect(): H5C_protect() failed
major: Object cache
minor: Unable to protect metadata
#010: ../../src/H5C.c line 2378 in H5C_protect(): Can't load cache image
major: Object cache
minor: Unable to load metadata into cache
#011: ../../src/H5Cimage.c line 1164 in H5C__load_cache_image(): Can't reconstruct cache contents from image block
major: Object cache
minor: Unable to decode value
#012: ../../src/H5Cimage.c line 3137 in H5C__reconstruct_cache_contents(): reconstruction of cache entry failed
major: Object cache
minor: Internal error detected
#013: ../../src/H5Cimage.c line 3408 in H5C__reconstruct_cache_entry(): invalid entry size
major: Object cache
minor: Bad value

It would be helpful to know what the developers think of this and what we could do to further debug the issue.


Orion Poplawski
Manager of NWRA Technical Systems 720-772-5637
NWRA, Boulder/CoRA Office FAX: 303-415-9702
3380 Mitchell Lane [email protected]
Boulder, CO 80301 https://www.nwra.com/


It's inside a VM, on an XFS filesystem.

Make use of C++11 `override`

Even if HDF5 doesn't require at least C++11, it could use the override keyword conditionally like this:

#if defined __cplusplus && 201103L <= __cplusplus
#define HDF_OVERRIDE override
#else
#define HDF_OVERRIDE
#endif

In projects that now require C++11, like VTK and ITK, HDF5 headers are now causing lots of compile warnings from, for example, clang's -Wsuggest-override and -Wsuggest-destructor-override.

Suppressing these warnings isn't a good idea, as override is great for finding mistakes.

I could make a patch if it would be accepted...
