Add mp.freeze_support to every entry point, for frozen binaries (#1191)
sergey-yaroslavtsev merged 1 commit into master
Conversation
Python 3.9 (minimum version) failed the first time; a simple restart of the CI job was enough to succeed.
What issue exactly? An issue with the unit test? If it is an issue with one of the PyMca applications: can we see a traceback or error? I would be surprised.
The issue is that, when opening an HDF5 file in a frozen MacOS binary, the entries are not shown. The problem is not really the test suite itself (although I have sent #1192, that will not solve anything). The only information we get is:
We have two solutions:
I would vote for accepting this PR on the philosophy that even if we do not try to use multiprocessing, the module is a standard module and some dependency might use it. This PR anticipates it.
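For context, the change is the guard the standard library documents for frozen executables; a minimal sketch (the `main` function is a hypothetical stand-in for a PyMca entry point):

```python
import multiprocessing


def main():
    # hypothetical application entry point
    print("application code")


if __name__ == "__main__":
    # Required in frozen binaries (PyInstaller, py2app): when multiprocessing
    # re-executes the binary to start a worker, freeze_support() runs the
    # worker bootstrap instead of launching the whole application again.
    # In a regular (non-frozen) interpreter the call is a no-op.
    multiprocessing.freeze_support()
    main()
```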
It is not necessary to do it for all the modules, including a …
Fine with me. It does not change anything for people not using the MacOS frozen binary, and it has not been uploaded to sourceforge yet.
I can confirm the dry-run mentioned above works under MacOS BigSur.
Dry-run seems to work. The HDF5 tree is visible; I reran the tests through the interactive console (114 tests, 4 skipped) on both MacOS and Windows, just to be sure.
If so, then it is only necessary for MacOS: 5.9.4 had multiprocessing for Windows without user complaints.
I would prefer this one.
Done.
```python
p.start()
try:
    p.join()
    try:
        return queue.get(block=False)
    except Empty:
        return default
finally:
    try:
        p.kill()
    except AttributeError:
        p.terminate()
```

That is why #1190 did not work: no error is raised. Thus, the exception branch in:

```python
def safe_hdf5_group_keys(file_path, data_path=None):
    try:
        return run_in_subprocess(
            get_hdf5_group_keys, file_path, data_path=data_path, default=list()
        )
    except Exception:
        _logger.warning("run_in_subprocess not available")
        return get_hdf5_group_keys(file_path, data_path)
```

could not be reached. If we want to get back to the logic of #1190, this is the thing to be modified. However, a potential issue is that it is not clear when and why it fails to work properly for one combination of OS + freezing procedure + with/without mp.freeze_support and not for another.
Update:
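The silent-failure behaviour described above can be reproduced outside PyMca; a minimal sketch (the names `child` and `fetch` are illustrative, and the `fork` start method is used so the example stays self-contained, whereas frozen MacOS binaries use `spawn`):

```python
import multiprocessing
from queue import Empty


def child(q):
    # Simulate the frozen-binary failure: the child dies before putting
    # anything on the queue.
    raise RuntimeError("invisible to the parent")


def fetch(default=None):
    ctx = multiprocessing.get_context("fork")
    q = ctx.Queue(maxsize=1)
    p = ctx.Process(target=child, args=(q,))
    p.start()
    p.join()  # returns normally even though the child raised
    try:
        return q.get(block=False)
    except Empty:
        return default  # the failure is swallowed; the caller sees the default


if __name__ == "__main__":
    print(fetch(default=[]))  # → [] and no exception, despite the child crash
```

Because the `except Empty` branch converts the crash into the default value, the caller's own `except Exception` fallback is never triggered.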
Ok, after talking to @sergey-yaroslavtsev I finally understood that all of this works fine:

```python
import multiprocessing

ctx = multiprocessing.get_context(context)
queue = ctx.Queue(maxsize=1)
p = ctx.Process(
    target=subprocess_main,
    args=(queue, target) + args,
    kwargs=kwargs,
)
p.start()
p.join()
```

I was convinced any of these things would be failing, and that we would then capture this in:

```python
return queue.get(block=False)
```

We capture that …
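That `p.start()` and `p.join()` do not raise when the child crashes can be checked directly; a small sketch (using the `fork` start method so the target can be defined locally):

```python
import multiprocessing


def failing_child():
    raise RuntimeError("raised inside the child process")


if __name__ == "__main__":
    ctx = multiprocessing.get_context("fork")
    p = ctx.Process(target=failing_child)
    p.start()
    p.join()  # returns normally; the child's exception does not propagate
    print(p.exitcode)  # nonzero exit code is the only signal of the failure
```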
The thing is that … Instead we now add …
Actually,

```python
def safe_hdf5_group_keys(file_path, data_path=None):
    try:
        return run_in_subprocess(
            get_hdf5_group_keys, file_path, data_path=data_path
        )
    except Exception:
        _logger.warning(
            "run_in_subprocess not available, multiprocessing is not imported or not protected."
        )
        try:
            return get_hdf5_group_keys(file_path, data_path=data_path)
        except Exception:
            # avoid crashing the main process
            _logger.warning("Failed to get HDF5 group keys for %s", file_path)
            return list()


def run_in_subprocess(target, *args, context=None, **kwargs):
    import multiprocessing

    ctx = multiprocessing.get_context(context)
    queue = ctx.Queue(maxsize=1)
    p = ctx.Process(
        target=subprocess_main,
        args=(queue, target) + args,
        kwargs=kwargs,
    )
    p.start()
    p.join()
    # check if the subprocess exited with an error
    if p.exitcode != 0:
        raise RuntimeError(f"Subprocess failed with exit code {p.exitcode}")
    try:
        return queue.get(block=False)
    except Empty:
        # subprocess succeeded but did not return a result
        raise RuntimeError("Subprocess did not return a result")
```

No matter if we add … Am I missing something?
Update:
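The `exitcode` check above can be exercised in isolation; a self-contained variant (the `fork` start method is assumed so locally defined targets work, and `_subprocess_main` is an illustrative stand-in for `subprocess_main`):

```python
import multiprocessing
from queue import Empty


def _subprocess_main(queue, target, *args, **kwargs):
    # run the target in the child and ship the result back to the parent
    queue.put(target(*args, **kwargs))


def run_in_subprocess(target, *args, context="fork", **kwargs):
    ctx = multiprocessing.get_context(context)
    queue = ctx.Queue(maxsize=1)
    p = ctx.Process(
        target=_subprocess_main, args=(queue, target) + args, kwargs=kwargs
    )
    p.start()
    p.join()
    # A crashed child no longer fails silently: a nonzero exit code raises
    # here, so the caller's except branch becomes reachable again.
    if p.exitcode != 0:
        raise RuntimeError(f"Subprocess failed with exit code {p.exitcode}")
    try:
        return queue.get(block=False)
    except Empty:
        raise RuntimeError("Subprocess did not return a result")


if __name__ == "__main__":
    print(run_in_subprocess(len, [1, 2, 3]))  # → 3
```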
No, `join` can definitely fail.
Yes, but we have HDF5Utils: either …
And if `start` fails then we immediately fall back to the direct method.
I guess this PR, combined with skipping the test in frozen code independently of the presence of multiprocessing or not, is the simplest way to move forward. The safe reading of HDF5 files while the files are being written is unlikely to be a concern when using frozen binaries.
This pattern ensures cleanup, whatever happens after `p.start()`:

```python
p.start()
try:
    p.join()
    ...
finally:
    try:
        p.kill()
    except AttributeError:
        p.terminate()
```

I don't see a need to change that pattern.
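For reference, `Process.kill()` only exists since Python 3.7, which is what the `AttributeError` fallback covers; a small runnable sketch of the pattern (helper names are illustrative, `fork` start method assumed):

```python
import multiprocessing
import time


def sleeper():
    time.sleep(60)  # stand-in for work that may hang


def run_with_cleanup(timeout=0.2):
    ctx = multiprocessing.get_context("fork")
    p = ctx.Process(target=sleeper)
    p.start()
    try:
        p.join(timeout)  # give the child a bounded amount of time
    finally:
        try:
            p.kill()  # SIGKILL; available on Python >= 3.7
        except AttributeError:
            p.terminate()  # SIGTERM fallback for older Pythons
        p.join()  # reap the child
    return p.exitcode


if __name__ == "__main__":
    print(run_with_cleanup())  # negative: the child was killed by a signal
```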
It was merged as accepted because it is better than the current state, and to be able to test other branches.

Bring back parts removed in #1190