Our computational setup is a Docker image on an AWS instance, and all of my processing runs within an NFS directory. I'm not sure whether this is similar to a mounted SMB volume like in #31. The image runs Ubuntu 18.04. Let me know if you have any advice. The error I receive after Ctrl+C is pasted below:
12-04 15:50:14, INFO Noise Level:0.0
^CTraceback (most recent call last):
  File "/common/workdir/IsoNet/bin/isonet.py", line 497, in <module>
    fire.Fire(ISONET)
  File "/cloud-home/X/.magellan/conda/envs/isonet/lib/python3.10/site-packages/fire/core.py", line 135, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/cloud-home/X/.magellan/conda/envs/isonet/lib/python3.10/site-packages/fire/core.py", line 468, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/cloud-home/X/.magellan/conda/envs/isonet/lib/python3.10/site-packages/fire/core.py", line 684, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/common/workdir/IsoNet/bin/isonet.py", line 391, in refine
    run(d_args)
  File "/common/workdir/IsoNet/bin/refine.py", line 115, in run
    get_cubes_list(args)
  File "/common/workdir/IsoNet/preprocessing/prepare.py", line 184, in get_cubes_list
    p.map(func,inp)
  File "/cloud-home/X/.magellan/conda/envs/isonet/lib/python3.10/multiprocessing/pool.py", line 367, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "/cloud-home/X/.magellan/conda/envs/isonet/lib/python3.10/multiprocessing/pool.py", line 768, in get
    self.wait(timeout)
  File "/cloud-home/X/.magellan/conda/envs/isonet/lib/python3.10/multiprocessing/pool.py", line 765, in wait
    self._event.wait(timeout)
  File "/cloud-home/X/.magellan/conda/envs/isonet/lib/python3.10/threading.py", line 607, in wait
    signaled = self._cond.wait(timeout)
  File "/cloud-home/X/.magellan/conda/envs/isonet/lib/python3.10/threading.py", line 320, in wait
    waiter.acquire()
KeyboardInterrupt
Hello,
I'm running into an issue during refine where it gets stuck in the prepare.py phase, in the function get_cubes_list. This problem seemed similar to #31, so I tried running with --preprocessing_ncpus 1, and that made it through the subtomogram preparation. Is there any way to overcome this issue?