Conversation
…ersion of is_valid_enum which defers loading of the data until it is actually required; most h5py.Dataset instances do not qualify as enums. This change could also be added to the template-based validation in a future version, although the performance gain is expected to be smaller there: for template-based validation prior to writing a NeXus file, all data are already in main memory, whereas for HDF5 they first need to be loaded from disk. So far, clean_str_attr(dataset[()]) ALWAYS did that load, for every dataset irrespective of its size, only to discard most of these data two lines later when the code figures out that the dataset is not an instance of an enum concept.
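A minimal sketch of this deferred pattern (the function name, the concept-dict layout, and the loader callable are hypothetical, not the actual pynxtools API): the enum check happens before the payload is touched, so non-enum datasets never trigger a disk read.

```python
import numpy as np

def is_valid_enum_lazy(concept: dict, load_data) -> bool:
    """Validate a dataset against an enum concept, loading data only if needed.

    concept: dict describing the NXDL concept (hypothetical layout);
    load_data: zero-argument callable that reads the payload from disk.
    """
    enum_items = concept.get("enumeration")
    if enum_items is None:
        # Most datasets are not enums: skip the (potentially huge) read entirely.
        return True
    values = np.asarray(load_data()).ravel()  # only now do we pay the I/O cost
    allowed = set(enum_items)
    return all((v.decode() if isinstance(v, bytes) else v) in allowed for v in values)
```

With an h5py.Dataset one would pass e.g. `lambda: dataset[()]` as the loader, so the indexing that materializes the array runs only inside the enum branch.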
… resolving issues for em
…initial round of tests with float32: validation is now blazing fast for all three cases (contiguous, chunked uncompressed, chunked compressed). The substantial speedup is not a signature of failure or of bypassing certain function calls but expected. In the contiguous float32 test case the concept entry1/measurement/event1/image1/real is float32 with a 10 GB uncompressed payload. But keep in mind that h5py and HDF5 store datasets of elementary datatypes always as type-homogeneous arrays, so testing whether the field is valid reduces to a simple comparison of the dtype attribute against the accepted ones; there is no need to load the data at all, unless the datatype is NX_POSINT. A similar speedup compared to v0.13.2 is expected for is_valid_enum: in that version the data were always fully loaded first, only to be immediately discarded inside the function call in most cases because the node did not encode an enum concept. The commit on this feature branch tested here checks FIRST whether the node is an enum, so the loading can be skipped entirely, which is essentially where the speedup comes from. We should test again with a concept of type NX_POSINT and our three storage schemes (contiguous, chunked uncompressed, chunked compressed); here we would expect a costly load of all data when contiguous, and a chunk-by-chunk positivity test during the is_valid_field_hdf evaluation, but the memory baseline should remain flat. Next up: i) implement this test with NX_POSINT and run again, ii) fix pytest from pynxtools, iii) move all testing code to proper tests and mark this feature branch as ready for review, i.e. move from draft to ready.
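The dtype-first logic described above can be sketched like this (function and constant names are illustrative, not the pynxtools implementation; `dset` is anything array-like with a `.dtype`, optionally exposing an `iter_chunks()` method as `h5py.Dataset` does for chunked layouts):

```python
import numpy as np

def is_valid_field(dset, nxdl_type: str) -> bool:
    """Sketch of a dtype-first field check (names are illustrative)."""
    if nxdl_type == "NX_FLOAT":
        # Type-homogeneous HDF5 array: the dtype alone decides, no data load needed.
        return np.issubdtype(dset.dtype, np.floating)
    if nxdl_type == "NX_POSINT":
        if not np.issubdtype(dset.dtype, np.integer):
            return False
        # Positivity cannot be read off the dtype: stream over chunks so that
        # at most one chunk is resident in memory at a time (full load otherwise).
        chunk_iter = getattr(dset, "iter_chunks", None)
        slices = chunk_iter() if chunk_iter else [np.s_[...]]
        return all((np.asarray(dset[s]) > 0).all() for s in slices)
    return True  # other NXDL types elided in this sketch
```

For the contiguous float32 case above this never reads the 10 GB payload; only the NX_POSINT branch scans values, and only one chunk at a time when chunking allows it.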
…sed in the code to work on datasets and non-string payloads, in which case the type annotations were imprecise
"cwd": "${workspaceFolder}/..",
"justMyCode": false,
"args": [
    "chunked_uncompressed.nxs",
reduce to one example
rettigl
left a comment
I appreciate the effort, but for me this PR includes too many changes to properly review. The connection of many of these changes to the issue at hand (large memory consumption) is not clear to me. I suggest splitting it up into several PRs.
Also, it changes the behavior of the validator by removing the acceptance of ints as floats, which we explicitly decided to allow at some point, as far as I remember.
isinstance(v.decode() if isinstance(v, bytes) else v, tuple(accepted_types))
for v in np.asarray(hdf_node[...]).flat
)
Not sure if this will work for any type of object data.
We should not support parsing of arbitrary object data; to be more precise, not any type of object data, which would include C-struct byte blobs.
I support allowing string data, even though these already involve a dozen formatting combinations: h5py reports variable-length null-terminated UTF-8, but other combinations such as fixed-length space-terminated ASCII are valid strings as well. I agree that there should be examples for it, but if this PR is anyway perceived as too large, then that is for sure another PR.
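To illustrate why several string flavors need handling (h5py can hand back fixed-length padded bytes as well as variable-length `bytes` or `str` objects), a small normalizer might look like this; the helper name is invented and this is not the pynxtools code:

```python
import numpy as np

def as_text(values) -> list:
    """Normalize common HDF5 string flavors to Python str (illustrative sketch).

    Covers fixed-length bytes ('S' dtype, e.g. null/space padded ASCII) and
    variable-length strings (object arrays of bytes or str, as h5py returns them).
    """
    out = []
    for v in np.asarray(values).ravel():
        if isinstance(v, bytes):
            out.append(v.decode("utf-8").rstrip("\x00 "))  # strip padding
        elif isinstance(v, str):
            out.append(v)
        else:
            raise TypeError(f"not a string payload: {type(v)}")
    return out
```

For inspecting what flavor a real dataset uses, h5py provides `h5py.check_string_dtype(dset.dtype)`, which reports encoding and fixed length.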
else:
    return False
"""
return True
Would move this into the first if clause for better readability
def is_valid_data_field_hdf(hdf_node: h5py.Dataset, nxdl_type: str, path: str):
    """Checks whether value of hdf_node is valid according to the type defined in the NXDL."""
Why is this needed now, was not there before, and how is it related to the memory optimization?
It is true that the PR started from memory optimization, but adding a quick fix focused only on that would for sure have faced backlash as too sloppy. Therefore, a thorough revision of the function calls and a reorganization.
Before, we had is_valid_data_field, which immediately makes calls that fully unpack all data into memory.
The validation implementation so far, rightly so, reused most of the functions that are also used in validate_dict, all of them functions for which memory optimization was never an issue.
I agree there is some functional duplication between is_valid_data_field and is_valid_enum and their *_hdf variants; making value respectively hdf_node a parameter accepting several input types, with internal type distinction, would be an alternative.
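The alternative mentioned here, one entry point with internal type distinction, could be sketched with `functools.singledispatch`; all names and the `LazyDataset` stand-in are hypothetical, not the pynxtools API:

```python
from functools import singledispatch
import numpy as np

class LazyDataset:
    """Stand-in for h5py.Dataset: exposes dtype without loading the payload."""
    def __init__(self, dtype):
        self.dtype = np.dtype(dtype)

@singledispatch
def is_valid_data_field(value, nxdl_type: str) -> bool:
    """Default, in-memory path: value is already a Python/NumPy object."""
    if nxdl_type == "NX_FLOAT":
        return np.issubdtype(np.asarray(value).dtype, np.floating)
    return True

@is_valid_data_field.register
def _(value: LazyDataset, nxdl_type: str) -> bool:
    """Lazy path: decide from the dtype alone, never materializing the data."""
    if nxdl_type == "NX_FLOAT":
        return np.issubdtype(value.dtype, np.floating)
    return True
```

One public function, two implementations, no *_hdf name duplication; whether that beats two specialized functions is exactly the trade-off discussed here.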
| ) | ||
|
|
||
|
|
||
| def is_valid_enum_hdf( |
Same as above, how is this connected to the mem optimization?
Same comment as above: it is a matter of what we want, large complex functions that deal with a multitude of use cases, or specialized functions that still keep the amount of code duplication as small as possible. Luckily we are using Python, so there is no need for data-type-specific template programming.
with File(file, "r") as h5file:
-    for entry, nxdl in def_map.items():
+    for entry, nxdl in sorted(def_map.items()):
Why this change? Is it related to the issue?
It is related to issues I observed with non-deterministic behavior, so I went hard through the code to inspect each case where an implicit assumption that a certain order just works automagically is unsubstantiated. Even for HDF5, the sequence in which attributes are iterated over can be version dependent, given that a low-level and a high-level backend (HDF5, h5py) are involved. So for this change, and for changes in some other places, we should inspect hard whether the order is really always correct and the way we wish it to be. It might be that the sorted call here is redundant; then I take this more as a note for us to check.
Fact is, the old implementation is indeed vulnerable to order effects if best namefit finds more than one concept with the same highest score. So something that started as a pragmatic PR for memory optimization indeed demanded a closer inspection.
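A toy example of the order effect being discussed (illustrative only, not the pynxtools resolver): with `sorted()`, the iteration result no longer depends on how the mapping was populated, so a tie between equally scored candidates is always broken the same way.

```python
def resolve_in_order(def_map: dict) -> list:
    """Iterate candidate entries in a deterministic, input-order-independent way."""
    return [(entry, nxdl) for entry, nxdl in sorted(def_map.items())]
```

Two mappings with the same content but different insertion order now yield identical iteration results.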
pytest.param(
    ["src/pynxtools/data/201805_WSe2_arpes.nxs"],
    [
        "The value at /entry/collection_time should be one of the following Python types: (<class 'float'>, <class 'numpy.floating'>), as defined in the NXDL as NX_FLOAT.",
See above, should be auto converted
"Field /entry/instrument/analyser/projection has no documentation.",
"Field /entry/instrument/analyser/sensor_count has no documentation.",
"Field /entry/instrument/analyser/working_distance has no documentation.",
"The value at /entry/instrument/beam_probe_0/distance should be one of the following Python types: (<class 'float'>, <class 'numpy.floating'>), as defined in the NXDL as NX_FLOAT.",
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""Test cases for the Writer class used by the DataConverter"""
What is being tested for here?
All code in test_validation_opt we can move eventually to test_validation
def test_validate_file_storage_layouts(
    storage_layout, tmp_path, caplog, expected_warnings
):
    """Test validation of a NeXus/HDF5 with the same content but different storage layout."""
Why not integrate this in the general validation tests?
with caplog.at_level(logging.INFO):
    observed_infos = [
-        r.getMessage() for r in caplog.records if r.levelno == logging.INFO
+        rec.getMessage() for rec in caplog.records if rec.levelno == logging.INFO
We use r and rec inconsistently throughout our code base; yes, that could go into its own PR, or just be fixed here as we go.
Thank you @rettigl. Wrt the last point, there is a good reason why NeXus distinguishes NX_NUMBER, NX_(U,POS)INT, and NX_FLOAT. I do not agree that this past decision was a good one. IMHO a validator should check and report where there are inconsistencies; it should not perform type conversions for convenience. Feeding pynxtools the right types straightaway should be the responsibility of the pynxtools readers. I can see that people may be fine with just using the default precision of data types, like Python's built-in float and int, but I disagree with cross-accepting what are different elementary data types. Not least because our current implementation would then also need to check whether really every int value is exactly mappable onto a float of a certain precision.
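The precision concern can be made concrete with a quick round-trip check (illustrative only, not a pynxtools function): int64 values beyond 2^53 cannot be represented exactly in float64, so silently accepting ints as NX_FLOAT can lose information.

```python
import numpy as np

def losslessly_castable(values, target=np.float64) -> bool:
    """Check whether int data survive a round trip through the float type (sketch)."""
    arr = np.asarray(values)
    # Cast to the float type and back; equality means no precision was lost.
    return bool(np.array_equal(arr, arr.astype(target).astype(arr.dtype)))
```

Small ints round-trip fine, but e.g. 2**53 + 1 as int64 collapses to 2**53 in float64, which is exactly the check a cross-accepting validator would additionally need.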
Planned for v0.14.0
The other aspect of #737: for HDF5-file-based validate we wish to reduce the number of cases in which a large h5py.Dataset payload needs to be loaded into main memory, especially when a chunked storage layout would allow iterating over chunks.
Motivation was usage of the standalone HDF validator on a 32 GiB file from CENEM, containing essentially only one dataset (a large image stack, approx. 32 GiB uncompressed payload), on a 128 GiB RAM workstation. Although the image stack is stored chunked, all data gets loaded at some point, eventually generating multiple copies in main memory in the process, causing the validation process to terminate after trying to allocate past all the memory available on the workstation.
Key benefits: