Releases: predsci/psi-io

v2.0.7

21 Apr 21:45

PATCH NOTES

  • Added the ability to attach attributes to Datasets in the write routines.
    • These key-value pairs can be passed through as **kwargs in the write_hdf_data routine (and the relevant functions that dispatch to this base writer).
  • Added the convert and convert_psih4_to_psih5 routines for converting between HDF versions.
    • The former copies the exact data/attributes from one version to another.
    • The latter enforces PSI conventions when converting between HDF4 and HDF5.
  • Significantly updated the API documentation and added examples for writing and converting HDF files.
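The attribute pass-through described above can be sketched as follows. This is a hypothetical stand-in, not the real psi_io.write_hdf_data (whose exact signature may differ); it only shows how extra key-value keyword arguments become dataset attributes.

```python
# Hypothetical stand-in for the **kwargs attribute pass-through described in
# the notes above; write_hdf_data_sketch is NOT part of psi_io.

def write_hdf_data_sketch(data, scales=None, **attrs):
    """Collect any extra key-value kwargs as dataset attributes."""
    return {"data": data, "scales": scales or [], "attrs": attrs}

record = write_hdf_data_sketch([1.0, 2.0, 3.0], units="G", description="example")
print(record["attrs"])  # {'units': 'G', 'description': 'example'}
```

In the real routine these attributes would be written into the HDF file alongside the Dataset rather than returned in a dict.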

v2.0.6

06 Mar 18:50

PATCH NOTES

  • Added sync_dtype functionality to the HDF writers.
    • When True, each scale is cast to the datatype of the corresponding Dataset.
    • This behavior is necessary for certain Fortran tools within the PSI software ecosystem.
      • The internal logic of such tools relies on uniform precision between a dataset and its coordinates.
  • wrhdf_1d, wrhdf_2d, and wrhdf_3d default to datatype synchronization, while the more general write_hdf_data routine does not.
    • This design choice was made to mimic legacy PSI HDF writers, while also allowing newer write-routines to be flexible in their datatype protocols.
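The casting behavior behind sync_dtype can be illustrated with plain NumPy. This is an illustrative sketch of the described behavior, not psi_io's internal implementation:

```python
import numpy as np

# Illustration of the sync_dtype behavior described above (not psi_io itself):
# when enabled, each coordinate scale is cast to the main dataset's datatype,
# giving the uniform precision that downstream Fortran tools expect.

def sync_scale_dtypes(data, scales):
    """Cast every scale array to the dtype of the main dataset."""
    return [np.asarray(s, dtype=data.dtype) for s in scales]

data = np.zeros((3, 4), dtype=np.float32)
scales = [np.linspace(0, 1, 3), np.linspace(0, 1, 4)]  # default float64
synced = sync_scale_dtypes(data, scales)
print([s.dtype for s in synced])  # [dtype('float32'), dtype('float32')]
```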

v2.0.5

10 Feb 23:38

PATCH NOTES

  • Minor patch to pin the h5py version.
    • h5py is now pinned to >=3.8.
    • Version 3.8 introduced the Dataset.is_scale property, used throughout the HDF5 dispatch methods in psi_io.
      • NOTE: If using the standard PSI Conda Recipe, be sure to update your h5py version to accommodate this recent change.
      • If you identify any other dependency conflicts, please submit them to the Issue Tracker.
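When updating an environment by hand, a small version guard can confirm the pin is satisfied. This is a hypothetical helper using only the standard library (the package itself enforces the pin through its dependency metadata):

```python
# Hypothetical helper to verify a dotted version string meets a minimum,
# e.g. checking h5py.__version__ against the >=3.8 pin noted above.

def meets_minimum(version, minimum=(3, 8)):
    """Return True if a dotted version string is at least `minimum`."""
    parts = tuple(int(p) for p in version.split(".")[:len(minimum)])
    return parts >= minimum

print(meets_minimum("3.8.0"))  # True
print(meets_minimum("3.7.2"))  # False
```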

v2.0.4

30 Jan 20:13

PATCH NOTES

  • Added new write_hdf_data function:
    • Designed to handle both HDF4 and HDF5 writing for canonical "PSI-style" datasets.
    • Extends previous writers' functionality, allowing one to write datasets with no scales (or scales attached to arbitrary dataset dimensions) with non-canonical dataset identifiers, e.g. converting CHMap Database files from HDF5 to HDF4 or vice versa.
    • Resolves issues with HDF4 scale writing; input datatypes are now preserved (whereas previous writers upcast to float64 to account for issues with SWIG bindings).
    • Enforces datatype inputs across writers.
      • NOTE: pyhdf does not support the use of float16 and int64 datatypes.
  • Refactored legacy writers to call write_hdf_data:
    • Existing writers' API signatures – wrhdf_1d, wrhdf_2d, and wrhdf_3d – remain unchanged to allow for backwards compatibility.
    • wrhdf_1d, wrhdf_2d, and wrhdf_3d are (functionally) calls to write_hdf_data using the default "PSI-style" dataset identifiers, along with a dimensionality enforcement check.
      • NOTE: due to this refactor, the legacy writers no longer perform upcasting on coordinate variables – see notes for the new write_hdf_data function above.
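The pyhdf datatype restriction noted above can be guarded against up front. The sketch below is illustrative, not the actual psi_io enforcement code:

```python
import numpy as np

# Illustrative datatype guard matching the note above: pyhdf (HDF4) does not
# support float16 or int64, so a writer can reject those dtypes early.
# This is NOT the actual psi_io implementation.

UNSUPPORTED_HDF4 = {np.dtype(np.float16), np.dtype(np.int64)}

def check_hdf4_dtype(arr):
    """Raise TypeError if the array's dtype cannot be written via pyhdf."""
    if arr.dtype in UNSUPPORTED_HDF4:
        raise TypeError(f"pyhdf does not support dtype {arr.dtype}")
    return arr.dtype

print(check_hdf4_dtype(np.zeros(3, dtype=np.float32)))  # float32
```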

v2.0.2

28 Jan 18:33

MAJOR RELEASE

  • Unified the API for reading PSI's HDF4 and HDF5 files. The unified reading/writing routines also handle files with non-PSI-standard data structures (and have resolved some of the idiosyncrasies with HDF4 dimension datatypes).
  • Completed documentation of the psi-io package API and added some simple examples.
  • Added a comprehensive test suite that tests both HDF4 and HDF5 versions of the routines.

Breaking Changes

  • All of the "newer" read/writing/interpolation routines take – as their first positional argument – the ifile parameter (Path or str). Therefore, when integrating these changes over version 1.0, your "find and replace" approaches should look for:

    • read_hdf_meta
    • read_rtp_meta
    • read_hdf_by_index
    • read_hdf_by_value
    • np_interpolate_slice_from_hdf
    • sp_interpolate_slice_from_hdf
    • interpolate_positions_from_hdf
  • The np_interpolate_slice_from_hdf function now returns the data array in the proper Fortran order (to be consistent with all of the other routines in the package).
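The ordering change in the last bullet can be expressed with plain NumPy. This illustrates Fortran (column-major) versus C (row-major) layout; it is not psi_io code:

```python
import numpy as np

# Illustration of the memory-layout distinction behind the bullet above:
# np_interpolate_slice_from_hdf now returns arrays in Fortran (column-major)
# order, consistent with the other routines in the package.

c_order = np.arange(6).reshape(2, 3)  # default C (row-major) layout
f_order = np.asfortranarray(c_order)  # same values, column-major layout

print(c_order.flags["F_CONTIGUOUS"])  # False
print(f_order.flags["F_CONTIGUOUS"])  # True
print(np.array_equal(c_order, f_order))  # True: values are unchanged
```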

Other notes

  • Usage of the "classic" rdhdf_1d, rdhdf_2d, and rdhdf_3d helper functions remains the same, but they are now wrappers for the newer read_hdf_data.
  • The new unified writer (write_hdf_data) can handle a more diverse range of datatypes for HDF4 (along with saving non-standard datasets, e.g. from the Coronal Hole Map database). The "classic" writers wrhdf_1d, wrhdf_2d, and wrhdf_3d remain the same for now.

v1.0.0 (initial version)

26 Jun 19:40

Initial consolidation of our HDF reading routines into a single pip-installable Python package, which is now available as psi-io on PyPI.

Users familiar with psihdf.py or psi_io.py should be able to use this as a seamless drop-in replacement, e.g.:

# if you used to use the standalone psi_io.py:
import psi_io

# if you used to use the standalone psihdf.py and don't want to edit existing code:
import psi_io as psihdf

The package was designed not to require HDF4, but if pyhdf is present, .hdf files can be read without issue.
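The optional-dependency behavior above follows a common Python pattern, sketched below. This is an illustration; the package's own detection logic may differ:

```python
import importlib.util

# Sketch of the optional-dependency pattern described above: HDF4 support is
# only active when pyhdf is importable; otherwise the package runs HDF5-only.
# Illustrative only; not psi_io's actual detection code.

def hdf4_available():
    """Return True if the optional pyhdf package is importable."""
    return importlib.util.find_spec("pyhdf") is not None

if hdf4_available():
    print("HDF4 (.hdf) files can be read")
else:
    print("HDF5-only mode: install pyhdf to enable .hdf reading")
```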