A quick issue collecting various problems that came up during PR #66; apologies for the messy report. See PR #81 for initial (failing) tests.
n_jobs
LeafletFinder with n_jobs == 2 does not pass the tests; see #66 (comment).
- Currently marked XFAIL in #66 (update to dask 0.18.0) and on master
distributed
LeafletFinder fails when the scheduler is a distributed.Client; see also the initial PR #81.
```
______________________________________________ TestLeafLet.test_leaflet_single_frame[distributed-2-1] _______________________________________________
self = <test_leaflet.TestLeafLet object at 0xd26ea5fd0>, u_one_frame = <Universe with 5040 atoms>
correct_values_single_frame = [array([ 1, 13, 25, 37, 49, 61, 73, 85, 97, 109, 121,
133, 145, 157, 169, 181, 193, ..., 4477, 4489,
4501, 4513, 4525, 4537, 4549, 4561, 4573, 4585, 4597, 4609, 4621,
4633, 4645, 4657, 4669])]
n_jobs = 1, scheduler = <Client: scheduler='tcp://127.0.0.1:56156' processes=2 cores=4>
@pytest.mark.parametrize('n_jobs', (-1, 1, 2))
def test_leaflet_single_frame(self,
                              u_one_frame,
                              correct_values_single_frame,
                              n_jobs,
                              scheduler):
    lipid_heads = u_one_frame.select_atoms("name PO4")
    u_one_frame.trajectory.rewind()
    leaflets = leaflet.LeafletFinder(u_one_frame,
                                     lipid_heads).run(start=0, stop=1,
                                                      n_jobs=n_jobs,
>                                                     scheduler=scheduler)
/Volumes/Data/oliver/Biop/Projects/Methods/MDAnalysis/pmda/pmda/test/test_leaflet.py:67:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/Volumes/Data/oliver/Biop/Projects/Methods/MDAnalysis/pmda/pmda/leaflet.py:295: in run
    cutoff=cutoff)
/Volumes/Data/oliver/Biop/Projects/Methods/MDAnalysis/pmda/pmda/leaflet.py:205: in _single_frame
    Components = parAtomsMap.compute(**scheduler_kwargs)
/Users/oliver/anaconda3/envs/pmda/lib/python3.6/site-packages/dask/base.py:155: in compute
    (result,) = compute(self, traverse=False, **kwargs)
/Users/oliver/anaconda3/envs/pmda/lib/python3.6/site-packages/dask/base.py:392: in compute
    results = schedule(dsk, keys, **kwargs)
/Users/oliver/anaconda3/envs/pmda/lib/python3.6/site-packages/distributed/client.py:2308: in get
    direct=direct)
/Users/oliver/anaconda3/envs/pmda/lib/python3.6/site-packages/distributed/client.py:1647: in gather
    asynchronous=asynchronous)
/Users/oliver/anaconda3/envs/pmda/lib/python3.6/site-packages/distributed/client.py:665: in sync
    return sync(self.loop, func, *args, **kwargs)
/Users/oliver/anaconda3/envs/pmda/lib/python3.6/site-packages/distributed/utils.py:277: in sync
    six.reraise(*error[0])
/Users/oliver/anaconda3/envs/pmda/lib/python3.6/site-packages/six.py:693: in reraise
    raise value
/Users/oliver/anaconda3/envs/pmda/lib/python3.6/site-packages/distributed/utils.py:262: in f
    result[0] = yield future
/Users/oliver/anaconda3/envs/pmda/lib/python3.6/site-packages/tornado/gen.py:1099: in run
    value = future.result()
/Users/oliver/anaconda3/envs/pmda/lib/python3.6/site-packages/tornado/gen.py:1107: in run
    yielded = self.gen.throw(*exc_info)
/Users/oliver/anaconda3/envs/pmda/lib/python3.6/site-packages/distributed/client.py:1492: in _gather
    traceback)
/Users/oliver/anaconda3/envs/pmda/lib/python3.6/site-packages/six.py:692: in reraise
    raise value.with_traceback(tb)
/Users/oliver/anaconda3/envs/pmda/lib/python3.6/site-packages/distributed/protocol/pickle.py:59: in loads
    return pickle.loads(x)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
>                   for k in _ANCHOR_UNIVERSES.keys()])))
E                   RuntimeError: Couldn't find a suitable Universe to unpickle AtomGroup onto with Universe hash 'f065a285-b5d1-44db-a2e9-c1de8b73c716'. Available hashes:
/Users/oliver/anaconda3/envs/pmda/lib/python3.6/site-packages/MDAnalysis/core/groups.py:127: RuntimeError
--------------------------------------------------------------- Captured stderr call ----------------------------------------------------------------
distributed.worker - WARNING - Could not deserialize task
Traceback (most recent call last):
  File "/Users/oliver/anaconda3/envs/pmda/lib/python3.6/site-packages/MDAnalysis/core/groups.py", line 119, in _unpickle
    u = _ANCHOR_UNIVERSES[uhash]
  File "/Users/oliver/anaconda3/envs/pmda/lib/python3.6/weakref.py", line 137, in __getitem__
    o = self.data[key]()
KeyError: UUID('f065a285-b5d1-44db-a2e9-c1de8b73c716')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/Users/oliver/anaconda3/envs/pmda/lib/python3.6/site-packages/distributed/worker.py", line 1387, in add_task
    self.tasks[key] = _deserialize(function, args, kwargs, task)
  File "/Users/oliver/anaconda3/envs/pmda/lib/python3.6/site-packages/distributed/worker.py", line 801, in _deserialize
    function = pickle.loads(function)
  File "/Users/oliver/anaconda3/envs/pmda/lib/python3.6/site-packages/distributed/protocol/pickle.py", line 59, in loads
    return pickle.loads(x)
  File "/Users/oliver/anaconda3/envs/pmda/lib/python3.6/site-packages/MDAnalysis/core/groups.py", line 127, in _unpickle
    for k in _ANCHOR_UNIVERSES.keys()])))
RuntimeError: Couldn't find a suitable Universe to unpickle AtomGroup onto with Universe hash 'f065a285-b5d1-44db-a2e9-c1de8b73c716'. Available hashes:
```
(complete error message from pytest)