lazy import of multiprocessing #1190
Conversation
It is either this solution or the already merged one.
I do not care which one you take. Please feel free to close without merging.
There are 4 files that configure the generation of a frozen application: three of these exclude multiprocessing and one includes multiprocessing in the frozen application.
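For illustration, here is a minimal, hypothetical sketch of how such an excludes list is typically consumed; it is not a copy of any of the four project files. In a cx_Freeze setup script the list goes into the build_exe options, and the PyInstaller .spec files pass a similar list to Analysis(excludes=...). Commenting out the append line is what turns an excluding configuration into an including one.

# Hypothetical cx_Freeze-style sketch (not the project's cx_setup.py)
from cx_Freeze import Executable, setup

excludes = []
excludes.append("multiprocessing")  # comment this out to bundle multiprocessing

setup(
    name="frozen_app",                              # hypothetical name
    version="0.1",
    options={"build_exe": {"excludes": excludes}},
    executables=[Executable("main.py")],            # hypothetical entry point
)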
#1185 got merged by the reviewer. FYI, our current policy is that the reviewer approves or rejects, and the author merges when approved.
Distributed packages in both cases include multiprocessing because the Windows and macOS configurations have the exclusion commented out.
Anyway, we need to change the freezing and the test - that is why I am confused by both approaches of @vasole.
The current situation in master:
So in this state
The situation in this branch:
So in this state
I would like to add that all those frozen configurations work.
So to be clear, it is an explicit choice to have 3 configurations that include multiprocessing?
So, comparing the state of master and this MR, in addition to the things that need fixing in both cases, I would vote for this MR with the addition of using
Ok. It is unusual because in the generic case one does not always have the rights to merge. If you have decided that, I will of course adhere. The other generic practice that I do not see applied in this repository is that development takes place in a forked repository. For minor changes I can understand the "patch branches" made through the web interface.
Done. It was contemplated in #1188.
Also this changed. We mostly use the main repo these days. Some silx examples:
Yes. @sergey-yaroslavtsev made sure that old and new configuration files were available.
After fixing the typo I think this MR is good to go. I'll let @sergey-yaroslavtsev approve or possibly comment on things I missed.
Ok, although I consider it a mistake for two reasons:
As I understand it, we have this in terms of multiprocessing inclusion:

grep -r 'excludes.append("multiprocessing")' .
./package/pyinstaller/pyinstaller.spec:excludes.append("multiprocessing")
./package/pyinstaller/pyinstaller_github.spec:# excludes.append("multiprocessing")
./package/cxfreeze/cx_setup_github.py:#excludes.append("multiprocessing")
./package/cxfreeze/cx_setup.py:#excludes.append("multiprocessing")

Edit: I messed up yes/no and
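As a side note, a quick way to confirm what a given frozen binary actually shipped is to try the import at runtime. This is only a small sketch, not part of the MR:

# Small sketch: run inside each frozen application (or a plain interpreter)
# to see whether multiprocessing was bundled and whether we are frozen.
import sys

try:
    import multiprocessing  # fails if the freezing configuration excluded it
except ImportError:
    multiprocessing = None

print("running frozen:", getattr(sys, "frozen", False))
print("multiprocessing bundled:", multiprocessing is not None)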
The CI runs only on merge requests and pushes to master. Not sure how we get fewer jobs by creating PRs from personal forks.
Indeed. Just a curious thing -
Yes, the last version looks correct.
In the original cx_Freeze configuration I would have expected multiprocessing to be excluded, as was the case for PyInstaller, but I have checked v5.9.4 and it was already like it is now. It worked because cx_Freeze was only used under Windows and HDF5Utils.py was never called from a frozen binary. When you added the HDF5 sorting method to it (before, it was inside another module) it would have been broken, but the changes of @sergey-yaroslavtsev made it work. When this PR gets merged, all freezing approaches will work again because the lazy import solves the issue created by moving the sorting method to HDF5Utils.py.
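To make the lazy-import idea concrete, here is a rough sketch of the pattern, assuming a run_in_subprocess helper with the signature used in the test snippets below; the actual code in HDF5Utils.py may differ in detail. The point is simply that multiprocessing is imported inside the helper, so merely importing HDF5Utils in a frozen binary no longer drags multiprocessing in.

# Sketch only: illustrates the lazy import, not the real HDF5Utils implementation.
def _call_and_store(queue, target):
    # Module-level helper so it stays picklable under the spawn start method.
    queue.put(target())

def run_in_subprocess(target, default=None):
    """Run target() in a subprocess and return its result,
    or `default` if the subprocess dies (e.g. with a segfault)."""
    import multiprocessing  # lazy import: only triggered when the helper is used

    queue = multiprocessing.Queue()
    process = multiprocessing.Process(target=_call_and_store, args=(queue, target))
    process.start()
    process.join()
    if process.exitcode != 0:
        return default
    try:
        return queue.get(timeout=1)
    except Exception:
        return default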
Because more immature merge requests are made when testing is not done in a "separate" personal fork.
If the CI uses secrets, it is not possible to test it properly in a fork.
Did not work. To avoid confusion I will make another PR and run a dry run there. We can substitute the DMG file from there tomorrow morning after testing. If you think it is a good idea, one can delete the DMG file from the current GitHub release.
In this MR we never call import multiprocessing at module level. So the issue is that we need to do the same in the test, not that we need to call import multiprocessing ourselves. So instead of

try:
    import multiprocessing
except Exception:
    pass

class testHDF5Utils(unittest.TestCase):

    @unittest.skipIf("multiprocessing" not in sys.modules, "skipped multiprocessing missing")
    def testSegFault(self):
        self.assertEqual(_safe_cause_segfault(default=123), 123)

we should skip like

def _pass_through():
    return 0

def _cause_segfault():
    import ctypes
    i = ctypes.c_char(b"a")
    j = ctypes.pointer(i)
    c = 0
    while True:
        j[c] = b"a"
        c += 1

class testHDF5Utils(unittest.TestCase):

    def testSegFault(self):
        # Verify that run_in_subprocess can be used
        try:
            result = HDF5Utils.run_in_subprocess(_pass_through, default=123)
        except Exception:
            if not getattr(sys, "frozen", False):
                raise
            self.skipTest("multiprocessing does not work for the current frozen binary")
        self.assertEqual(result, 0)
        # Check that run_in_subprocess works as intended when the function segfaults
        result = HDF5Utils.run_in_subprocess(_cause_segfault, default=123)
        self.assertEqual(result, 123)
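This way the test exercises run_in_subprocess itself instead of only checking that multiprocessing happened to import, and it skips only where the frozen binary genuinely cannot use it. As a small usage note (the module name testHDF5Utils.py below is hypothetical), the revised test can be run from a normal interpreter or from the frozen binary:

# Hypothetical runner; adjust the import to wherever the test module lives.
import unittest
from testHDF5Utils import testHDF5Utils

suite = unittest.TestLoader().loadTestsFromTestCase(testHDF5Utils)
unittest.TextTestRunner(verbosity=2).run(suite)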
Alternative solution to #1185